CN114545963A - Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment - Google Patents


Info

Publication number: CN114545963A
Application number: CN202111566669.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 王佳楠, 闭昌瑀, 于晨, 王春彦, 王丹丹
Current and original assignee: Beijing Institute of Technology (BIT) (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by Beijing Institute of Technology (BIT); priority to CN202111566669.3A


Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104: Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircraft, e.g. formation flying
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The invention discloses a method, a system and electronic equipment for optimizing a multi-unmanned-aerial-vehicle panoramic surveillance video. The method mainly comprises: controlling multiple unmanned aerial vehicles that satisfy a preset image overlap ratio to acquire first images; and stitching the first images to obtain the multi-unmanned-aerial-vehicle panoramic surveillance video. The invention has a wide application range, high flexibility and great application potential: it can stitch images captured by different unmanned aerial vehicles and configure the multi-unmanned-aerial-vehicle panoramic surveillance system according to the actual task requirements.

Description

Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a method, a system and electronic equipment for optimizing multi-unmanned aerial vehicle panoramic monitoring videos.
Background
An unmanned aerial vehicle carries a power unit and a navigation module and can be flown by remote control or autonomously along a preset route. A UAV aerial-photography system uses the unmanned aerial vehicle as a platform carrying image or video acquisition equipment, and high-resolution images and video can be obtained in real time through wireless transmission. Compared with satellite remote sensing and manned aerial photography, UAV aerial photography is low-cost, small, easy to operate and flexible; it is widely applied in fields such as battlefield reconnaissance and emergency rescue and has become a powerful supplement to satellite remote sensing. Traditional UAV aerial photography uses only a single unmanned aerial vehicle; limited by factors such as the shooting angle and the camera's field of view, it is difficult to form a comprehensive picture of the photographed scene or to track a dynamic target continuously.
The rapid development of information technologies such as 5G provides a foundation for real-time transmission of UAV imagery and for communication among multiple unmanned aerial vehicles; together with the maturing of cooperative UAV control technology, the design and realization of multi-UAV systems have become possible.
Overcoming the limitations of single-UAV panoramic technology and of traditional panoramic surveillance systems to realize multi-UAV panoramic surveillance is a breakthrough technology for future surveillance systems.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, a system and electronic equipment for optimizing multi-unmanned aerial vehicle panoramic monitoring videos.
In order to achieve the above object, in a first aspect, the present invention provides a method for optimizing a multi-drone panoramic surveillance video, which includes the following steps:
controlling multiple unmanned aerial vehicles meeting the preset image overlap ratio to acquire a first image;
and splicing the first images to obtain the multi-unmanned-aerial-vehicle panoramic monitoring video.
In a second aspect, the present invention provides a system for optimizing multi-drone panoramic surveillance video, including:
the acquisition module is used for controlling the multiple unmanned aerial vehicles satisfying the preset image overlap ratio to acquire a first image;
and the splicing module is used for splicing the first images to obtain the multi-unmanned-aerial-vehicle panoramic monitoring video.
In a third aspect, the present invention provides an electronic device, comprising: a memory, a processor;
the memory is used for storing processor executable instructions;
the processor is used for realizing the method for optimizing the multi-unmanned aerial vehicle panoramic monitoring video according to the executable instructions stored in the memory.
In a fourth aspect, the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the method for optimizing a multi-drone panoramic surveillance video according to the first aspect is implemented.
The method, system and electronic equipment for optimizing multi-unmanned-aerial-vehicle panoramic surveillance videos have the following advantages:
(1) the method has a wide application range, high flexibility and great application potential; it can break through terrain limitations, enables rapid UAV formation and rapid acquisition from multiple lenses, and, with different formation patterns, can obtain the desired panoramic surveillance images according to actual requirements;
(2) the relative positions of the unmanned aerial vehicles are feedback-controlled using image information, so the overlapping area of the images can be controlled effectively and the efficiency of the panoramic surveillance video is improved;
(3) the multiple unmanned aerial vehicles can adjust their own formation according to the image-processing results, balancing the quality and the real-time performance of the surveillance picture and reducing the workload of manual UAV operation;
(4) the method can be applied in diverse environments: it can be used widely in the civil field, for example providing security escort for large gatherings or sports events, and can further be extended to the military field, playing a key role in tactical tasks such as battlefield surveillance and situation assessment; it is of great significance for safeguarding people's lives and national security.
Drawings
Fig. 1 is a schematic application environment diagram of the method for optimizing multi-drone panoramic surveillance video of the present invention;
fig. 2 is a schematic flow chart of a method of optimizing multi-drone panoramic surveillance video of the present invention;
fig. 3 is a schematic structural diagram of the system for optimizing multi-drone panoramic surveillance video of the present invention;
fig. 4 is a graph of the splicing effect of three unmanned aerial vehicles spliced in real time into a planar panorama in experimental example 1 of the present invention.
Detailed Description
The following detailed description of preferred embodiments of the invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thus more clearly delimit the scope of protection of the invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article or apparatus that comprises the element.
Fig. 1 is a schematic view of an application environment of the method for optimizing a multi-drone panoramic surveillance video according to the present invention. As shown in fig. 1, the system mainly comprises multiple drones (two or more), a ground station (a terminal or server), and communication equipment. Specifically: the ground station, usually centered on a computer, processes image information, plans the formation pattern, generates control instructions, and stitches and displays images; each drone acquires images with its onboard sensors (gyroscope, accelerometer, camera, etc.) and adjusts its own position according to the image-stitching result; the communication equipment realizes communication between the multiple drones and the ground station.
In a first aspect, fig. 2 is a schematic flow chart of the method for optimizing a multi-drone panoramic surveillance video according to the present invention. As shown in fig. 2, the method mainly includes the following steps:
s101, controlling multiple unmanned aerial vehicles meeting the preset image overlap ratio to acquire a first image.
S102, splicing the first images to obtain the multi-unmanned-aerial-vehicle panoramic monitoring video.
In the invention, when the images acquired by the multiple drones satisfy the preset image overlap ratio, the drones can fly or hover in a fixed formation, and the returned first images are stitched periodically to obtain a complete panoramic surveillance video. The first image may consist of a plurality of consecutive frames or of a single frame.
Preferably, the stitching process further includes: extracting feature points from the first image.
Specifically, the feature points are salient points in the first image, such as contour points, bright points in darker areas, and dark points in brighter areas.
More preferably, the process of extracting feature points from the first image includes:
dividing each first image into a plurality of regions;
extracting feature points from each region of each first image, for example with the ORB algorithm, where the feature points (corners) of each region are extracted by OFAST corner detection;
and, if a region yields no feature points, lowering the OFAST corner-detection threshold so that feature points can still be extracted from that region.
Specifically, the region division is performed according to the actual situation of the first image, for example according to the image size or the number of feature points in the image, and the number of regions may differ between first images: a smaller image, or one with fewer feature points, may be divided into fewer regions, while a larger image, or one with more feature points, may be divided into more regions.
For example, with three first images, if the second contains fewer feature points than the first and third, the first and third should be divided into somewhat more regions than the second.
In this way, feature points can be extracted even from weakly textured areas, increasing the information content of each first image and improving the stitching quality.
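The per-region extraction with threshold fallback described above can be sketched as follows. This is a toy, pure-Python illustration: the region layout, the "response" values and the threshold schedule are assumptions for demonstration, not the patent's actual OFAST implementation.

```python
def detect_in_region(pixels, threshold):
    """Toy corner detector: keep points whose response exceeds threshold."""
    return [(x, y) for (x, y, response) in pixels if response >= threshold]

def extract_per_region(regions, threshold=40, min_threshold=5, step=5):
    """Detect features per region; relax the threshold for empty regions."""
    features = {}
    for name, pixels in regions.items():
        t = threshold
        pts = detect_in_region(pixels, t)
        # A region yielding nothing gets a progressively lower threshold
        # (down to a floor), so weak-texture areas still contribute points.
        while not pts and t > min_threshold:
            t -= step
            pts = detect_in_region(pixels, t)
        features[name] = pts
    return features

regions = {
    "strong_texture": [(1, 1, 80), (2, 3, 55)],
    "weak_texture":   [(7, 8, 12)],        # below the default threshold
}
feats = extract_per_region(regions)
```

With the default threshold of 40, the weak-texture region would be empty; the fallback lowers the threshold until its single low-response corner is accepted.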
Preferably, before extracting feature points from the first image, the method further includes: preprocessing the first image.
More preferably, camera correction and image denoising are performed for each first image.
Specifically, the camera is corrected using the existing Zhang Zhengyou calibration method.
If first images from an uncorrected camera are stitched directly, the discrepancy between the actual camera and the imaging model is large, errors arise during registration, and complete stitching of the first images cannot be achieved. The camera must therefore be calibrated to obtain the distortion coefficients; the first images are corrected before registration and fusion, which safeguards the efficiency of the subsequent registration and fusion.
Specifically, the existing median-filtering method is used to denoise the first image; to highlight the filtering result, salt-and-pepper noise is added to the first image before filtering.
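The salt-and-pepper injection followed by median filtering can be sketched in one dimension with pure Python (a real pipeline would filter 2-D frames, e.g. with a 3x3 window; the signal values here are made up for illustration):

```python
from statistics import median

def add_salt_and_pepper(signal, positions, salt=255, pepper=0):
    """Inject alternating salt (bright) and pepper (dark) spikes."""
    noisy = list(signal)
    for i, p in enumerate(positions):
        noisy[p] = salt if i % 2 == 0 else pepper
    return noisy

def median_filter(signal, k=3):
    """Sliding-window median with edge replication."""
    half = k // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [median(padded[i:i + k]) for i in range(len(signal))]

clean = [10, 12, 11, 13, 12, 10, 11]
noisy = add_salt_and_pepper(clean, positions=[2, 5])   # spikes 255 and 0
restored = median_filter(noisy)
```

The isolated spikes are replaced by neighborhood medians, which is exactly why median filtering suits impulse (salt-and-pepper) noise better than mean filtering.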
More preferably, the preprocessing further comprises dynamic key-frame extraction from the first image.
Specifically, the dynamic key-frame extraction may include:
selecting frames at a preset interval in the first image as key frames;
and, when a key frame is erroneous, selecting the frame following it as the key frame instead.
For example, the preset interval is set according to the camera frame rate. With a camera at 30 frames/second and every 10th frame taken as a key frame, the key frames are the 1st, 11th, 21st, …; if an error occurs in the 21st frame, the 22nd frame is selected as the key frame instead. The key frames are then the objects of subsequent processing.
In this step, dynamic key-frame extraction reduces the number of images to be stitched, effectively reducing computation time while preserving the accuracy of the stitching result as far as possible.
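The key-frame rule above can be sketched directly; the frame numbering (from 1) and the way an "erroneous" frame is flagged are assumptions for illustration:

```python
def select_key_frames(total_frames, interval=10, bad_frames=()):
    """Every `interval`-th frame is a key frame; an erroneous key frame
    is replaced by the next available frame."""
    keys = []
    f = 1                          # frames numbered from 1, as in the text
    while f <= total_frames:
        k = f
        while k in bad_frames and k <= total_frames:
            k += 1                 # slide past erroneous frames
        if k <= total_frames:
            keys.append(k)
        f += interval
    return keys

# 30 fps camera, one key frame per 10 frames, frame 21 corrupted:
keys = select_key_frames(total_frames=40, interval=10, bad_frames={21})
```

This reproduces the example in the text: the 21st frame is skipped and the 22nd becomes the key frame.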
More preferably, the process of extracting feature points from the first image further includes:
subdividing the regions of each first image with a quadtree algorithm to obtain a plurality of sub-regions;
illustratively, the picture is first divided into 30 x 30 regions for preliminary OFAST corner detection, and then each region is repartitioned with a quadtree for further OFAST corner detection;
stopping the subdivision when the number of sub-regions of a first image exceeds a preset threshold or the image cannot be subdivided further;
and selecting the best-quality feature point from each sub-region, so as to ensure a uniform distribution of the feature points.
The feature points are obtained by the OFAST corner-detection algorithm; the larger the intensity deviation between a feature point (corner) and its neighboring pixels, and the more neighboring pixels that deviate, the better the quality of the feature point, and vice versa.
Specifically, the preset threshold can be set according to the actual situation.
This process improves the uniformity of the feature-point distribution and prevents the feature points from clustering into piles.
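A minimal pure-Python sketch of the quadtree selection step: regions holding several corners are split into four sub-regions, and each final leaf keeps only its best-quality corner. The stopping rule used here (a single point, or a region at most 2 units wide) is a simplifying assumption standing in for the patent's preset threshold.

```python
def quadtree_select(points, x0, y0, x1, y1):
    """points: list of (x, y, quality); return one best point per leaf."""
    inside = [p for p in points if x0 <= p[0] < x1 and y0 <= p[1] < y1]
    if not inside:
        return []
    # Leaf: keep only the best-quality corner in this cell.
    if len(inside) == 1 or (x1 - x0) <= 2 or (y1 - y0) <= 2:
        return [max(inside, key=lambda p: p[2])]
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    kept = []
    for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                               (x0, my, mx, y1), (mx, my, x1, y1)):
        kept += quadtree_select(points, qx0, qy0, qx1, qy1)
    return kept

# Two corners piled in one small cell, plus two isolated corners:
pts = [(1, 1, 9), (0, 1, 5), (1, 2, 7), (6, 6, 8)]
kept = quadtree_select(pts, 0, 0, 8, 8)
```

The weaker of the two piled corners is discarded, while isolated corners survive, which spreads the retained features evenly across the image.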
More preferably, the process of extracting feature points from the first image further includes: creating a feature vector for each feature point as its descriptor, using the BRIEF algorithm.
Preferably, the stitching process further includes: coarse-matching the feature points with a nearest-neighbor method to obtain registration feature points.
More specifically, the Hamming distance is used for the coarse matching, based on two principles:
(1) if two feature codes agree on fewer than 128 bit positions, the corresponding points cannot be a pair of registration feature points;
(2) a feature point in one first image and the feature point in the other first image whose feature code agrees with it on the most bit positions form a pair of registration feature points.
Suppose descriptors A to Z of many feature points await registration. Taking A as an example: among the remaining descriptors, suppose 5 feature points agree with A on more than 128 bit positions while the other 18 agree on fewer than 128; the registration point can then only be among those 5.
Further, if among those five points C, D, E, F and G, point E agrees with A on the most bit positions, then E is the registration feature point of A.
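The two-rule coarse matching can be sketched on toy 256-bit BRIEF-style descriptors (the descriptors below are contrived so that one candidate passes the 128-position gate and one does not):

```python
def same_bits(a, b, n_bits=256):
    """Number of bit positions on which descriptors a and b agree."""
    return n_bits - bin(a ^ b).count("1")

def coarse_match(desc, candidates, n_bits=256):
    """Return the best-matching candidate key, or None if no candidate
    agrees on more than half (128) of the bit positions."""
    best_key, best_same = None, n_bits // 2      # rule (1): must exceed 128
    for key, d in candidates.items():
        s = same_bits(desc, d, n_bits)
        if s > best_same:                        # rule (2): most agreement wins
            best_key, best_same = key, s
    return best_key

a = (1 << 256) - 1                  # all-ones descriptor for point A
candidates = {
    "E": a ^ 0b111,                 # differs in 3 bits -> 253 in common
    "C": a ^ ((1 << 200) - 1),      # differs in 200 bits -> 56 in common
}
match = coarse_match(a, candidates)
```

Agreement on bit positions is simply 256 minus the Hamming distance, so rule (1)'s "more than 128 identical positions" is equivalent to a Hamming distance below 128.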
Owing to differences in brightness and exposure, two images have a certain gray-level difference, leaving a visible boundary near the stitching line.
Specifically, the process of splicing processing further includes:
calculating a projection transformation matrix between the first images from the registration feature points to obtain the spatial position relationship between the first images; that is, from the pairs of registration feature points, the projection transformation matrix of the first image can be computed by the least-squares method or matrix inversion, yielding the spatial position relationship between the first images;
and fusing the first images with a gradual-change weight method according to the spatial position relationship.
In the invention, the positive Z axis points directly behind the lens, the positive X axis points horizontally to the right, and the positive Y axis points vertically upward; the acquired first image is an X-Y plane image.
More preferably, the weight is obtained by the following formula one:
[Formula One appears only as an image in the original document; it defines the gradual-change weight δ(x_i, y_i).]
where δ(x_i, y_i) denotes the weight of the (x_i, y_i)-th pixel in the overlap between the reference image and the first image to be stitched; θ denotes the angle between the center line of the reference image and the first image to be stitched and the X direction; x_min and x_max denote the X-direction coordinates of the left and right boundaries of the overlapping region of the reference image and the first image to be stitched, and y_min and y_max the Y-direction coordinates of its lower and upper boundaries; i is an integer greater than or equal to …
In the invention, the reference image may be an acquired standard image, an image obtained by splicing at least two first images, or a first image to be spliced.
More preferably, the image I obtained by fusing the reference image and the first image to be stitched is given by Formula Two:
I(x_i, y_i) = δ(x_i, y_i) * A(x_i, y_i) + [1 - δ(x_i, y_i)] * B′(x_i, y_i)    (Formula Two)
where I(x_i, y_i) denotes the value (a gray value or an RGB color) of the (x_i, y_i)-th pixel in the fused image I; A(x_i, y_i) denotes the value of that pixel in the reference image; and B′(x_i, y_i) denotes the value of that pixel in the smoothed first image to be stitched, B′ being given by Formula Three:
B′(x_i, y_i) = s * B(x_i, y_i) + d    (Formula Three)
where B(x_i, y_i) denotes the value of that pixel in the first image to be stitched, and s and d are manually set parameters that can be tuned according to the stitching effect.
For example, suppose three drones A to C, where drone A collects first image a, drone B collects first image b, and drone C collects first image c, arranged from left to right along the X direction as first image a, first image b, first image c.
In this case, the reference image may be the standard image or first image a. When the reference image is first image a and the first image to be stitched is first image b, images a and b are stitched according to Formulas One to Three, giving a smooth transition across their overlapping region from left to right; then, taking the stitched result of images a and b as the new reference image, it is stitched with first image c in the same way, again achieving a smooth transition across the overlapping region.
This method adjusts the brightness of the whole stitched picture so that the brightness difference between the overlapping part and the rest is small, effectively eliminating seam traces in both directions.
Substituting the δ(x_i, y_i) obtained above into Formula Two and evaluating it for every pixel of the reference image and the first image to be stitched finally yields the fused image I.
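Formulas Two and Three can be sketched per pixel as below. Since Formula One is not reproduced in this document, the sketch assumes the simplest gradual-change weight, a linear ramp in X across the overlap (δ = 1 at the left edge, 0 at the right edge); the patent's actual δ also involves the angle θ.

```python
def delta(x, x_min, x_max):
    """Assumed linear gradual-change weight across the overlap in X."""
    return (x_max - x) / (x_max - x_min)

def fuse_pixel(a, b, x, x_min, x_max, s=1.0, d=0.0):
    """Fuse one pixel of reference image A with image-to-stitch B."""
    b_prime = s * b + d                      # Formula Three: smoothing of B
    w = delta(x, x_min, x_max)               # stand-in for Formula One
    return w * a + (1 - w) * b_prime         # Formula Two: weighted blend

# Overlap spans x in [10, 20]; A is bright (200), B is dark (100):
left  = fuse_pixel(200, 100, x=10, x_min=10, x_max=20)   # pure A side
mid   = fuse_pixel(200, 100, x=15, x_min=10, x_max=20)   # halfway blend
right = fuse_pixel(200, 100, x=20, x_min=10, x_max=20)   # pure B side
```

At the left boundary the output equals A, at the right boundary it equals B′, and in between the gray level ramps smoothly, which is what removes the visible seam.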
In this step, the distribution characteristics of the multiple drones are exploited, and by making effective use of local computing resources the complexity and computational load of the fusion can be reduced.
In a preferred embodiment of the present invention, before step S101, the method further includes:
(1) controlling a plurality of unmanned aerial vehicles to acquire a second image;
the step is mainly that many unmanned aerial vehicles take off after receiving the command of taking off, when many unmanned aerial vehicles reach the height of formation array type, open the camera and gather the second image to pass back the second image to ground satellite station through communication equipment. For example, when three drones reach the height of the formation, the camera of each drone is controlled to capture a second image, where the second image may consist of a succession of frames, or of one frame.
In a preferred embodiment of the present invention, before step (1), initializing multiple drones, including camera calibration and drone time calibration.
Camera correction ensures that no ghosting or blurring occurs when a fast-moving object is present in the target region.
Specifically, the camera is corrected using the existing "Zhang Zhengyou calibration method".
The unmanned aerial vehicle time calibration is used for synchronizing clocks of a plurality of unmanned aerial vehicles and a ground station and determining a time starting point of data flow. When the ground station receives the original data, the data nodes at the same time are selected as the starting points of the data streams.
In a more preferred embodiment of the present invention, before step (1), the method further comprises setting the formation conditions of the multiple drones, including the takeoff command, the initial height design, the height of the formation pattern, and the like.
In the invention, each drone can carry a different type of camera, so that different types of images of the same scene, such as visible-light and infrared images, can be obtained. This overcomes the problem that visible-light images are not clear enough at night or in heavy fog, and yields complete, comprehensive information for special tasks such as power-tower inspection. Meanwhile, the cameras can be re-aimed according to ground-station instructions, making the obtained images more accurate.
(2) Stitching the second images and judging whether the stitching result satisfies the preset image overlap ratio; if not, adjusting the multiple drones so that the second images they acquire satisfy the preset image overlap ratio.
In the invention, the preset image overlap ratio can be set according to the actual conditions. It should not exceed a certain value and may be inversely proportional to the number of reference points in the image; for example, the preset image overlap ratio is 20%.
Preferably, the process of stitching the second image is the same as the process of stitching the first image in step S102, and mainly includes:
preprocessing the second image;
extracting a dynamic key frame from the second image;
extracting feature points of the second image;
coarse matching is carried out on the feature points by adopting a nearest neighbor method to obtain registration feature points;
calculating a projection transformation matrix of the second images according to the registration characteristic points to obtain a spatial position relation between the second images;
and fusing the second image by using a gradual change weight method according to the spatial position relation.
The above process is consistent with the stitching process of the first image in step S102, and is not described in detail here.
From the stitching result of the second images, their image overlap ratio (overlapping area) can be obtained; when it does not satisfy the preset image overlap ratio, the multiple drones need to be adjusted.
Preferably, adjusting the multiple drones mainly includes adjusting their positions and/or adjusting their camera viewing angles.
Adjusting the positions of the multiple drones mainly includes: processing the second images returned by the drones to obtain the position information among them, fixing the position of one drone, and having the remaining drones adjust their own positions according to their relative positions with respect to it. Further, the remaining drones may be adjusted first in the Y direction and then in the X direction.
In particular, processing the returned second images yields their relative positions projected onto a common plane; if the depth of field of that plane is known, the relative positions of the drones in space can be computed. One drone's position is fixed, and the others adjust their positions according to the relative position information. Meanwhile, the ground station can adjust the control commands in real time according to the changing position information fed back by the onboard sensors and the positioning system (GPS), so that the stitching effect is utilized to the maximum.
Assume the positive Z axis points directly behind the lens, the positive X axis points horizontally to the right, and the Y axis is the vertical (height) direction, satisfying the right-hand rule.
The acquired image is then an X-Y plane image. The Y-direction (vertical) adjustment levels the images and avoids a stitched image of uneven height; the X-direction (left-right) adjustment balances the quality of the stitched image against the degree of picture repetition.
Position adjustment mainly keeps the heading angles of the drones consistent and keeps the drones on one straight line, then dynamically adjusts the spacing between them so that the image overlap ratio satisfies the preset image overlap ratio.
Specifically, the longitudinal position of each drone is controlled with a conventional PID algorithm: the heading angles of the drones at the initial moment are set, coordinate transformation matrices are computed, and the body frame of one drone (e.g., the one in the middle position) is taken as the reference frame O_B; the absolute positions of the remaining drones are converted into the O_B frame through rotation matrices, and feedback control keeps the multiple drones collinear in the vertical direction.
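The conversion of another drone's absolute position into the reference body frame O_B can be sketched in the planar (yaw-only) case; the yaw angle and positions below are made-up illustrative values:

```python
import math

def to_body_frame(p_abs, p_ref, yaw):
    """Express p_abs in the body frame of the reference drone at p_ref,
    whose heading (yaw) is measured from the absolute X axis: rotate the
    offset (p_abs - p_ref) by -yaw."""
    dx, dy = p_abs[0] - p_ref[0], p_abs[1] - p_ref[1]
    c, s = math.cos(-yaw), math.sin(-yaw)
    return (c * dx - s * dy, s * dx + c * dy)

ref = (0.0, 0.0)
yaw = math.pi / 2                        # reference drone faces +Y
# A drone 10 m "ahead" in absolute +Y lies on the body-frame +X axis:
wingman = to_body_frame((0.0, 10.0), ref, yaw)
```

Once every drone's position is expressed in O_B, "collinear in the vertical direction" reduces to driving the body-frame cross-track components to zero with the PID loop.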
The lateral position control of the drones is based on the image overlap ratio and the preset image overlap ratio: the actual overlap ratio of each image frame is computed, the lateral-motion control command is derived from the error between the actual and preset overlap ratios, and, combined with the longitudinal position control, the overall control command is computed and sent to the flight-control module, so that the images collected by the multiple drones satisfy the preset image overlap ratio.
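The lateral loop can be sketched as a conventional PID on the overlap-ratio error. The gains and the simplified "spacing to overlap" camera-footprint model are assumptions for illustration, not values from the patent:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = None

    def step(self, error, dt=1.0):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def overlap_from_spacing(spacing, fov_width=100.0):
    """Toy model: overlap fraction of two side-by-side camera footprints
    of width fov_width whose centers are `spacing` apart."""
    return max(0.0, (fov_width - spacing) / fov_width)

target = 0.20                       # preset image overlap ratio of 20%
pid = PID(kp=40.0, ki=2.0, kd=0.0)
spacing = 60.0                      # drones too close: overlap is 40%
for _ in range(50):
    error = overlap_from_spacing(spacing) - target
    spacing += pid.step(error)      # positive error widens the spacing
final_overlap = overlap_from_spacing(spacing)
```

A positive error (too much overlap) pushes the drones apart, a negative one pulls them together, and the loop settles near the preset 20% overlap.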
According to the invention, by using image information and adjusting the positions or cameras of the other drones relative to one drone, no additional position-feedback module is required. This effectively simplifies the working process, saves working time, and allows the surveillance-video task to be completed efficiently.
The method of adjusting the image overlap ratio by repositioning the drones is suitable only for near scenes. For far scenes, repositioning the drones does little to widen the field of view of the stitched image, and the camera view angles of the drones must be adjusted instead.
Adjusting the camera view angles of the multiple drones mainly includes: processing the second images returned by the drones to obtain the current view angles of their cameras, fixing the position of one drone, and adjusting the camera view angles of the remaining drones according to their relation to that drone's camera view angle. Further, the remaining drones' cameras may be adjusted first in the Y direction and then in the X direction.
For example, with three drones, the camera view angle of the middle drone is kept unchanged, and the view angles of the two outer drones' cameras are adjusted by feedback control until the preset image overlap ratio is reached.
In a preferred embodiment of the present invention, the method further comprises: when the multiple drones do not satisfy the preset image overlap ratio, or when the preset image overlap ratio changes, repeating steps (1) to (2).
In a preferred embodiment of the present invention, before step S101, the method further includes: performing time calibration on the multiple drones during flight. That is, the system time of each drone is kept identical to the time of the ground station, which avoids errors such as blurring and ghosting when high-speed moving objects are present in the target area.
Specifically, first, a plurality of drones are placed and fixed at a known location where their fields of view overlap;
then, the positional relationship among the images is acquired, and the images are stitched;
finally, an object moving at a fixed speed (such as a rolling ball) is moved through the overlapping area, and the images are stitched in the same way.
At this point, if a time error exists between the drones, the moving object appears more than once in the stitching result, and the time errors between the aircraft systems can be obtained by analyzing that result.
For example, assume two drones collect two images and the object appears at two positions a and b in the stitching result; let the object keep moving and accumulate the time it takes to travel from a to b. That accumulated time is the time error between the two images.
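Under the assumption of a constant-speed object, the time error follows directly from the separation of the two appearances, a and b, in the stitched frame; a minimal sketch:

```python
import math

def time_offset(pos_a, pos_b, speed):
    """Estimate the time error between two drones from the stitched frame.

    pos_a, pos_b : (x, y) positions of the SAME moving object as it appears
                   twice in the stitch (once per drone), in metres
    speed        : the object's known constant speed, in m/s
    """
    dist = math.dist(pos_a, pos_b)  # distance the object must travel a -> b
    return dist / speed

# An object moving at 0.5 m/s appearing 0.1 m apart implies a 0.2 s offset.
assert abs(time_offset((0.0, 0.0), (0.1, 0.0), 0.5) - 0.2) < 1e-9
```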
The splicing process may be an existing splicing process, or the splicing process of the present invention may be employed.
Preferably, the stitching processing of the present invention is adopted, that is, the stitching processing includes feature point extraction of an image;
coarse matching is carried out on the feature points by adopting a nearest neighbor method to obtain registration feature points;
calculating a projection transformation matrix of the images according to the registration characteristic points to obtain a spatial position relationship between the images;
and fusing the images by using a gradual change weight method according to the spatial position relation.
The specific process is the same as the splicing process of S102, and is not described herein again.
In a second aspect, fig. 3 is a schematic structural diagram of the system for optimizing the panoramic surveillance video of multiple drones according to the present invention. As shown in fig. 3, the system mainly includes:
the acquisition module 201 is used for controlling multiple unmanned aerial vehicles meeting the preset image coincidence degree to acquire a first image;
and the splicing module 202 is used for splicing the first images to obtain the multi-unmanned-aerial-vehicle panoramic monitoring video.
In a preferred embodiment of the present invention, the acquisition module in the system is further configured to control the multiple drones to acquire the second image;
the splicing module is also used for splicing the second image;
the system further comprises a judgment and adjustment module, wherein the judgment and adjustment module judges whether the splicing result meets the preset image contact ratio index, and if not, the multi-unmanned aerial vehicle is adjusted, so that the second images collected by the multi-unmanned aerial vehicle meet the preset image contact ratio.
The system for optimizing the multi-drone panoramic surveillance video provided by the invention can be used to execute the method for optimizing the multi-drone panoramic surveillance video described in any of the above embodiments; the implementation principle and technical effect are similar and are not described again here.
Preferably, in the system for optimizing multi-drone panoramic surveillance video, the acquisition module 201 and the stitching module 202 may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two.
A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
The Processor may be a Central Processing Unit (CPU), other general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), other Programmable logic devices, discrete Gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In a third aspect, the present invention provides an electronic device, comprising: a memory, a processor;
the memory is used for storing processor executable instructions;
the processor is used for realizing the method for optimizing the multi-unmanned aerial vehicle panoramic monitoring video according to the executable instructions stored in the memory.
In a fourth aspect, the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the method for optimizing a multi-drone panoramic surveillance video according to the first aspect is implemented.
In a fifth aspect, the present invention provides a program product comprising a computer program stored in a readable storage medium, from which the computer program can be read by at least one processor, the computer program being executable by the at least one processor to cause the method of optimizing a multi-drone panoramic surveillance video of the first aspect to be performed.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Experimental Examples
Experimental example 1
Three unmanned aerial vehicles (numbered A to C) were flown at the Beijing Institute of Technology. The camera model of the three drones is Parrot BEBOP 2, with 14 megapixels and a maximum acquired image resolution of 4096 × 3072. When the height of the three drones reaches 3 m, the three drones are controlled to acquire three second images;
the three second images are stitched, and whether the stitching result satisfies the preset image overlap ratio is judged, the preset image overlap ratio being that the overlap of the images acquired by the three drones is not more than 20%. If the image overlap is greater than 20%, the positions of the three drones are adjusted, specifically: drone A is fixed, and the positions of drones B and C are adjusted according to the preset image overlap ratio until the second images acquired by the three drones satisfy it;
the three drones satisfying the preset image overlap ratio are then controlled to acquire three first images: first image A from drone A, first image B from drone B, and first image C from drone C. The three first images are stitched to obtain the multi-drone panoramic surveillance video; the result is shown in fig. 4. The stitching specifically comprises the following steps:
performing camera correction and image denoising on the three first images A, B, C respectively;
dividing each of the first images A, B, C into a 30 × 30 grid according to its size to obtain a plurality of regions;
extracting a feature point for each region in the first image A, B, C;
if the region without the feature points exists, reducing an OFAST corner detection threshold value to enable the corresponding region to extract the feature points;
then, each region of the first image A, B, C is subdivided by using a quadtree algorithm to obtain a plurality of sub-regions;
selecting the characteristic point with the best quality corresponding to each sub-area;
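The grid division, threshold lowering, and best-point selection above can be sketched as follows (a simplified pure-NumPy stand-in: a gradient-magnitude score replaces the OFAST corner response, and picking one strongest point per grid cell replaces the quadtree subdivision):

```python
import numpy as np

def grid_strongest_points(gray, grid=(30, 30), init_thresh=50.0, min_thresh=1.0):
    """Pick one feature point per grid cell, lowering the response threshold
    for cells that would otherwise stay empty. The gradient-magnitude response
    is only a stand-in for the patent's OFAST corner score.
    """
    gy, gx = np.gradient(gray.astype(float))
    response = gx**2 + gy**2
    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    points = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = response[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            # Halve the threshold until the cell yields a point (or give up).
            t = init_thresh
            while cell.max() < t and t > min_thresh:
                t /= 2
            if cell.max() >= t:
                i, j = np.unravel_index(np.argmax(cell), cell.shape)
                points.append((c * cw + j, r * ch + i))  # (x, y) in the image
    return points

# A ramp image has strong gradients everywhere: every cell yields a point.
ramp = np.outer(np.arange(30, dtype=float), np.ones(30)) * 10.0
assert len(grid_strongest_points(ramp, grid=(3, 3))) == 9
```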
coarse matching is carried out on the feature points by adopting a nearest neighbor method to obtain registration feature points;
calculating a projective transformation matrix of the first images A, B, C according to the corresponding registration feature points to obtain a spatial position relationship between the first images A, B, C;
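The coarse nearest-neighbour matching step can be sketched like this (the 0.8 ratio test is a common heuristic for rejecting ambiguous matches, not a value taken from the patent):

```python
import numpy as np

def nn_match(desc_a, desc_b, ratio=0.8):
    """Coarse nearest-neighbour matching between two descriptor sets.

    A match (i, j) is kept only if the nearest neighbour in desc_b is
    sufficiently closer than the second-nearest one (Lowe-style ratio test).
    """
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(np.asarray(desc_a, dtype=float)):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

a = np.array([[0.0, 0.0], [10.0, 10.0]])
b = np.array([[0.1, 0.0], [10.0, 10.1], [50.0, 50.0]])
assert nn_match(a, b) == [(0, 0), (1, 1)]
```

The surviving registration pairs would then feed the projective-transformation (homography) estimation described above.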
according to the spatial position relationship, the first image A, B, C is fused by using a gradual change weight method, wherein the weight is obtained by the following formula one:
[Formula I: the weight expression is given as an image in the original publication and is not reproduced in this text extraction.]
wherein δ(x_i, y_i) denotes the weight of the (x_i, y_i)-th pixel of the reference image and the first image to be stitched; θ denotes the angle between the center line of the reference image and the first image to be stitched and the X direction; x_min and x_max denote the coordinate values of the left and right boundaries in the X direction of the overlapping region of the reference image and the first image to be stitched; y_min and y_max denote the coordinate values of the lower and upper boundaries in the Y direction of that overlapping region; and i is an integer greater than or equal to 1.
The image I obtained by fusing the reference image and the first image to be stitched is expressed by the following Formula II:
I(x_i, y_i) = δ(x_i, y_i) * A(x_i, y_i) + [1 - δ(x_i, y_i)] * B′(x_i, y_i)   (Formula II)
where I(x_i, y_i) denotes the value of the (x_i, y_i)-th pixel in the fused image I; A(x_i, y_i) denotes the value of the (x_i, y_i)-th pixel in the reference image; and B′(x_i, y_i) denotes the value of the (x_i, y_i)-th pixel in the smoothed first image to be stitched, where B′(x_i, y_i) is given by Formula III:
B′(x_i, y_i) = s * B(x_i, y_i) + d   (Formula III)
where B(x_i, y_i) denotes the value of the (x_i, y_i)-th pixel in the first image to be stitched, and s and d are parameters.
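Formulas II and III can be sketched directly (assuming NumPy; the per-pixel weight δ of Formula I is supplied precomputed here, since its exact expression appears only as an image in the original publication):

```python
import numpy as np

def blend_overlap(A, B, delta, s=1.0, d=0.0):
    """Fuse the overlap region per Formulas II and III:
        B' = s * B + d                       (Formula III)
        I  = delta * A + (1 - delta) * B'    (Formula II)

    A, B  : overlapping image regions of the same shape
    delta : per-pixel weight from Formula I, supplied precomputed
    s, d  : smoothing gain and offset applied to the image to be stitched
    """
    B_smooth = s * np.asarray(B, dtype=float) + d
    delta = np.asarray(delta, dtype=float)
    return delta * np.asarray(A, dtype=float) + (1.0 - delta) * B_smooth

# Equal weights average the two regions: 0.5*100 + 0.5*50 = 75.
A = np.full((2, 2), 100.0)
B = np.full((2, 2), 50.0)
assert np.allclose(blend_overlap(A, B, delta=np.full((2, 2), 0.5)), 75.0)
```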
First, first image A is taken as the reference image and first image B as the first image to be stitched; they are stitched according to Formulas I to III, so that the result transitions smoothly from the left to the right of their overlapping region. Then the stitched result of first images A and B is taken as the reference image and is stitched with first image C in the same way according to Formulas I to III, again achieving a smooth transition across the overlapping region.
Fig. 4 shows the effect of stitching the images of the three drones into a planar panorama in real time. On the left, the top image is the stitching result before the positions of the three drones were adjusted, and the two below it are the results after adjustment; the right side shows the real-time picture overlap. It can be seen from fig. 4 that after the positions of the three drones are adjusted, the field of view of the stitched image is enlarged and a good stitching effect is achieved.
The invention has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to limit the invention. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the technical solution of the present invention and its embodiments without departing from the spirit and scope of the present invention, which fall within the scope of the present invention.

Claims (10)

1. A method for optimizing multi-unmanned aerial vehicle panoramic monitoring videos is characterized by comprising the following steps:
controlling multiple unmanned aerial vehicles meeting the preset image overlap ratio to acquire a first image;
and splicing the first images to obtain the multi-unmanned aerial vehicle panoramic monitoring video.
2. The method for optimizing multi-drone panoramic surveillance video according to claim 1, characterized in that the stitching process includes:
extracting feature points of the first image;
and carrying out coarse matching on the feature points by adopting a nearest neighbor method to obtain registration feature points.
3. The method for optimizing multi-drone panoramic surveillance video according to claim 2, wherein the process of feature point extraction on the first image includes:
dividing each first image to obtain a plurality of areas;
extracting characteristic points of each region in each first image;
and if the region without the feature points exists, reducing the OFAST corner detection threshold value, so that the corresponding region can extract the feature points.
4. The method for optimizing multi-drone panoramic surveillance video of claim 3, further comprising:
subdividing the region of each first image by utilizing a quadtree algorithm to obtain a plurality of sub-regions;
selecting the characteristic point with the best quality corresponding to each sub-area;
preferably, the process of splicing process further includes:
calculating a projection transformation matrix of the first images according to the registration characteristic points to obtain a spatial position relation between the first images;
and fusing the first image by using a gradual change weight method according to the spatial position relation.
5. The method for optimizing multi-UAV panoramic surveillance video according to claim 4, wherein the weight is obtained by formula one:
[Formula I: the weight expression is given as an image in the original publication and is not reproduced in this text extraction.]
wherein δ(x_i, y_i) denotes the weight of the (x_i, y_i)-th pixel of the reference image and the first image to be stitched; θ denotes the angle between the center line of the reference image and the first image to be stitched and the X direction; x_min and x_max respectively denote the coordinate values of the left and right boundaries in the X direction of the overlapping region of the reference image and the first image to be stitched; and y_min and y_max respectively denote the coordinate values of the lower and upper boundaries in the Y direction of that overlapping region.
6. The method for optimizing the multi-drone panoramic surveillance video according to claim 1, further comprising, before controlling the process of acquiring the first image by the multi-drone that satisfies the preset image overlap ratio:
controlling a plurality of unmanned aerial vehicles to acquire a second image;
and splicing the second images, judging whether the splicing result meets the preset image contact ratio, if not, adjusting the multiple unmanned aerial vehicles, so that the second images acquired by the multiple unmanned aerial vehicles meet the preset image contact ratio.
7. The method of optimizing multi-drone panoramic surveillance video of claim 6, wherein the process of adjusting the multi-drone mainly includes:
adjusting the positions of the multiple unmanned aerial vehicles and/or adjusting the camera view angles of the multiple unmanned aerial vehicles.
8. A system for optimizing multi-drone panoramic surveillance video, characterized by comprising:
the acquisition module is used for controlling the multiple unmanned aerial vehicles meeting the preset image coincidence degree to acquire a first image;
and the splicing module is used for splicing the first images to obtain the multi-unmanned-aerial-vehicle panoramic monitoring video.
9. An electronic device, comprising: a memory, a processor;
the memory is to store the processor-executable instructions;
the processor is used for realizing the method for optimizing the multi-drone panoramic surveillance video according to any one of claims 1 to 7 according to the executable instructions stored by the memory.
10. A computer-readable storage medium having stored thereon computer-executable instructions for implementing the method for optimizing multi-drone panoramic surveillance video of any one of claims 1 to 7 when executed by a processor.
CN202111566669.3A 2021-12-20 2021-12-20 Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment Pending CN114545963A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111566669.3A CN114545963A (en) 2021-12-20 2021-12-20 Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment

Publications (1)

Publication Number Publication Date
CN114545963A true CN114545963A (en) 2022-05-27

Family

ID=81669575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111566669.3A Pending CN114545963A (en) 2021-12-20 2021-12-20 Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment

Country Status (1)

Country Link
CN (1) CN114545963A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116048117A (en) * 2022-12-31 2023-05-02 众芯汉创(北京)科技有限公司 Intelligent real-time monitoring system applied to unmanned aerial vehicle
CN116048117B (en) * 2022-12-31 2023-10-27 众芯汉创(北京)科技有限公司 Intelligent real-time monitoring system and method applied to unmanned aerial vehicle
CN117710467A (en) * 2024-02-06 2024-03-15 天津云圣智能科技有限责任公司 Unmanned plane positioning method, unmanned plane positioning equipment and aircraft
CN117710467B (en) * 2024-02-06 2024-05-28 天津云圣智能科技有限责任公司 Unmanned plane positioning method, unmanned plane positioning equipment and aircraft

Similar Documents

Publication Publication Date Title
US11748898B2 (en) Methods and system for infrared tracking
US10871258B2 (en) Method and system for controlling gimbal
US11897606B2 (en) System and methods for improved aerial mapping with aerial vehicles
CN108702444B (en) Image processing method, unmanned aerial vehicle and system
CN107659774B (en) Video imaging system and video processing method based on multi-scale camera array
CN109891351B (en) Method and system for image-based object detection and corresponding movement adjustment manipulation
CN105763790A (en) Video System For Piloting Drone In Immersive Mode
CN114545963A (en) Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment
CN106767720A (en) Single-lens oblique photograph measuring method, device and system based on unmanned plane
CN105187723A (en) Shooting processing method for unmanned aerial vehicle
CN106530239B (en) The mobile target low altitude tracking method of miniature self-service gyroplane based on the bionical flake of big visual field
WO2019104641A1 (en) Unmanned aerial vehicle, control method therefor and recording medium
EP2867873B1 (en) Surveillance process and apparatus
WO2018053785A1 (en) Image processing in an unmanned autonomous vehicle
CN109214288B (en) Inter-frame scene matching method and device based on multi-rotor unmanned aerial vehicle aerial video
CN105096284A (en) Method, device and system of generating road orthographic projection image
CN108737743B (en) Video splicing device and video splicing method based on image splicing
WO2021217403A1 (en) Method and apparatus for controlling movable platform, and device and storage medium
CN113271409A (en) Combined camera, image acquisition method and aircraft
WO2021056411A1 (en) Air route adjustment method, ground end device, unmanned aerial vehicle, system, and storage medium
WO2019100214A1 (en) Method, device, and unmanned aerial vehicle for generating output image
Yue et al. An intelligent identification and acquisition system for UAVs based on edge computing using in the transmission line inspection
Huang et al. Research and application of rapid 3D modeling technology based on UAV
CN113252008A (en) Shooting control method for aerial remote sensing narrow-view-field camera
Whitley Unmanned aerial vehicles (UAVs) for documenting and interpreting historical archaeological Sites: Part II—return of the drones

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination