CN115760979A - Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster


Info

Publication number
CN115760979A
Authority
CN
China
Prior art keywords: camera, offset, image, target, imaging
Prior art date
2022-11-09
Legal status
Pending
Application number
CN202211398545.3A
Other languages
Chinese (zh)
Inventor
熊召龙
葛雨辰
李顺枝
赖作镁
刘杰
向涛
万加龙
Current Assignee
CETC 10 Research Institute
Original Assignee
CETC 10 Research Institute
Priority date
2022-11-09
Filing date
2022-11-09
Publication date
2023-03-07
Application filed by CETC 10 Research Institute
Priority to CN202211398545.3A
Publication of CN115760979A
Legal status: Pending

Abstract

The invention discloses a method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster, comprising the following steps: obtaining the offset direction and offset amount corresponding to each camera in the camera array based on the calibration parameters of the camera array of the unmanned aerial vehicle cluster; guiding all cameras to the same imaging area as a reference camera based on the offset direction and offset amount corresponding to each camera; acquiring online fine calibration parameters for each camera in the camera array during motion, the online fine calibration parameters being formed by pixel mapping matrices; and obtaining an image of the occluded target based on the calibration parameters and the online fine calibration parameters. Under conditions in which the target is occluded, the invention can adaptively adjust the camera array carried by the unmanned aerial vehicle cluster so as to achieve optical imaging of an occluded or hidden target.

Description

Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster
Technical Field
The invention relates to the technical field of computational imaging of occluded objects, and in particular to a method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster.
Background
Optical imaging projects a three-dimensional scene onto a two-dimensional sensor through an optical imaging system, thereby obtaining optical information associated with the scene. With advances in optical design and sensor technology, cameras have reached every corner of daily life. Real-world scenes, however, are three-dimensional, and because light propagates in straight lines, a conventional camera cannot acquire an optical image of a hidden object. X-ray detection, computational imaging and photon imaging techniques have therefore been proposed in succession, each of which can image hidden objects under specific conditions. These methods, however, either depend on a large amount of correlation computation over the captured data or place very high sensitivity requirements on the sensor device, and cannot image a hidden object efficiently using visible light.
Computational imaging of occluded objects is often used in synthetic aperture radar detection: thanks to the penetrating capability of the electromagnetic band employed by synthetic aperture radar, it images well through scenes such as cloud and fog and is widely applied in air-defence early warning, situation awareness, sea-surface monitoring and similar fields. By contrast, research and development of occluded-target imaging in the optical bands, especially visible light, is comparatively scarce. In the field of occluded-target optical imaging, optical synthetic aperture techniques, represented by light-field computation and integral imaging computation, offer a new technical route.
Light-field computation, as a processing method for the seven-dimensional light field, overcomes the loss of scene depth in single-lens imaging and records and analyses the light-field information of the real world. Because light-field computation also records the angular information of rays, depth reconstruction with a light-field algorithm makes it possible to synthesize images corresponding to lenses whose focal lengths, apertures or other parameters do not physically exist. This computational imaging mode escapes the fixed aperture and depth-of-field constraints of conventional imaging and can achieve ultra-deep depth of field in very-large-aperture shooting environments; moreover, light-field imaging based on the synthetic aperture principle has a large optical equivalent aperture and can 'see through' a sparse occluder and focus on the target object. Although light-field computational imaging realizes the synthesis of such super-real lenses, the camera lenses are fixed and the imaging mode is not compatible with conventional two-dimensional imaging, so it suffers many usage limitations in unmanned aerial vehicle target imaging scenarios that demand flexible deployment.
Integral imaging computation is likewise a means of acquiring the light-field information of a three-dimensional scene and has high practical value in true three-dimensional display, three-dimensional reconstruction, synthetic aperture camera computational imaging and similar fields. Its information comes from a micro-image array, which is obtained with a micro-lens array, a camera array, computer rendering and the like. In acquiring an integral imaging micro-image array the number of cameras is large, and accurate physical alignment of the relative spatial positions and optical axis directions of the different cameras is difficult to achieve. Conventional integral imaging camera array shooting therefore requires a calibration board to correct the images captured by the array, so as to overcome shooting errors caused by pose differences between cameras. This conventional correction method is, however, limited by the size of the calibration board and cannot capture large or oversized three-dimensional scenes, which severely restricts the application range and practicability of integral imaging camera array shooting, further limits the data sources of the micro-image array, and makes the method inapplicable to unmanned aerial vehicle cluster scenarios.
Given that the skilled person currently has no effective solution for optically imaging an occluded target from a flexible moving platform such as an unmanned aerial vehicle cluster, how to overcome the inability of an unmanned aerial vehicle photoelectric pod to image occluded or hidden targets in various environments, so that it gains occluded-target imaging capability with flexible pose requirements and a simple, accurate computational imaging process, is a hotspot of wide concern to those skilled in the art.
Disclosure of Invention
In order to overcome the inability of existing unmanned aerial vehicle photoelectric pods to image occluded or hidden targets in various environments, the invention provides a method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster.
The method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster disclosed by the invention comprises the following steps:
Step 1: obtaining the offset direction and offset amount corresponding to each camera in a camera array based on the calibration parameters of the camera array of the unmanned aerial vehicle cluster, where each unmanned aerial vehicle in the cluster carries one camera and all the cameras carried by the cluster form the camera array;
Step 2: guiding all the cameras to the same imaging area as a reference camera based on the offset direction and offset amount corresponding to each camera, the reference camera being an arbitrarily designated camera in the camera array;
Step 3: acquiring online fine calibration parameters for each camera in the camera array during motion, the online fine calibration parameters being formed by pixel mapping matrices;
Step 4: obtaining an image of the occluded target based on the calibration parameters and the online fine calibration parameters.
Further, step 1 comprises:
Step 11: calculating the calibration parameters of the camera array, the calibration parameters comprising imaging extrinsic parameters, imaging intrinsic parameters and offline coarse mapping parameters;
Step 12: determining the spatial position relationship between the camera array and the occluded target;
Step 13: calculating the offset direction and offset amount corresponding to each camera based on the spatial position relationship.
Further, step 11 comprises:
The camera array has M × N cameras. The cameras photograph a checkerboard calibration board to obtain M × N corresponding calibration parallax images with resolution W_r × H_r, and the pixel coordinates of the checkerboard corner points in the calibration parallax images are detected;
Based on the corner coordinates in each calibration board image, the imaging extrinsic parameters and imaging intrinsic parameters of the camera in the m-th column and n-th row of the camera array are obtained by Zhang's calibration method; the imaging extrinsic parameters comprise the camera's rotation matrix R_{m,n} and translation vector t_{m,n}, and the imaging intrinsic parameters comprise the intrinsic matrix K_{m,n} composed of the camera's focal length and principal point offset;
A homography transformation matrix corresponding to each camera in the camera array is calculated based on the homography transformation principle and taken as the offline coarse mapping parameter between that camera and the reference camera, the homography transformation matrix being:

H_{m,n} = K_{m,n} R_0 R_{m,n}^{-1} K_{m,n}^{-1}

where H_{m,n} is the homography transformation matrix, R_0 is the rotation matrix in the imaging extrinsic parameters of the reference camera, and m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}.
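For illustration, the offline calibration of step 11 can be sketched in Python with OpenCV as below. This is a minimal sketch rather than the patented implementation: the board geometry (9 × 6 inner corners, 30 mm squares), all function names, and the use of cv2.calibrateCamera for Zhang's method are assumptions of ours; the coarse_homography helper implements the rotation-only homography as reconstructed above.

```python
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners of the checkerboard (assumed geometry)
SQUARE_MM = 30.0      # checkerboard square size in millimetres (assumed)

# World coordinates of the board corners on the Z = 0 plane.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def calibrate_camera(images):
    """Detect checkerboard corners and run Zhang's method for one camera."""
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ok, corners = cv2.findChessboardCorners(gray, PATTERN)
        if ok:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = images[0].shape[:2]
    # K: intrinsic matrix (focal length, principal point); rvecs/tvecs:
    # per-view extrinsic rotation and translation.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, (w, h), None, None)
    R, _ = cv2.Rodrigues(rvecs[0])   # rotation matrix of the calibration view
    return K, R, tvecs[0]

def coarse_homography(K, R, R0):
    """Offline coarse mapping to the reference orientation, per the
    reconstructed H_{m,n} = K R_0 R^{-1} K^{-1}."""
    return K @ R0 @ np.linalg.inv(R) @ np.linalg.inv(K)
```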
Further, step 12 comprises:
The distance ΔD_h between the occluded target in the target scene area and the camera array is obtained by measurement; the spatial distance between adjacent cameras in the camera array is the same everywhere and equal to ΔC; the target scene area comprises the occluder and the occluded target;
According to the size of the checkerboard calibration board, the extent W_b × H_b of the occluded scene photographed by the camera array at the plane of the calibration board is calculated.
Further, the offset direction is denoted θ_{m,n} and the offset amount is denoted S_{m,n}, and they satisfy respectively:

θ_{m,n} = (θ_x, θ_y)_{m,n}

S_{m,n} = (S_x, S_y)_{m,n}

where θ_x and θ_y are the components of the offset direction θ_{m,n} on the x-axis and y-axis, and S_x and S_y are the components of the offset amount S_{m,n} on the x-axis and y-axis.

The four expressions for θ_x, θ_y, S_x and S_y are given only as equation images in the source record and are not reproduced here; each is computed from the camera indices (m, n), the camera spacing ΔC, the target distance ΔD_h, the photographed scene extent W_b × H_b and the image resolution W_r × H_r, rounded to integers by the function round.
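The exact expressions behind these offsets survive only as equation images, so the sketch below is a hedged reconstruction under the standard synthetic-aperture parallax assumption: a camera displaced by a baseline of b metres sees a plane at depth ΔD_h shifted by b, times the pixels-per-metre at the calibration plane, times ΔD_b / ΔD_h. The reference index (m0, n0), the sign convention and every identifier here are ours, not the patent's formulas.

```python
import numpy as np

def camera_offsets(m, n, m0, n0, dC, dD_b, dD_h, W_r, H_r, W_b, H_b):
    """Hedged reconstruction of step 13: offset direction theta and offset
    amount S for camera (m, n) relative to the reference camera (m0, n0)."""
    bx = (m - m0) * dC            # horizontal baseline in metres
    by = (n - n0) * dC            # vertical baseline in metres
    ppm_x = W_r / W_b             # pixels per metre at the calibration plane
    ppm_y = H_r / H_b
    # Parallax of a plane at depth dD_h, rescaled from the calibration depth.
    sx = bx * ppm_x * dD_b / dD_h
    sy = by * ppm_y * dD_b / dD_h
    theta = (int(np.sign(sx)), int(np.sign(sy)))    # direction components
    S = (int(round(abs(sx))), int(round(abs(sy))))  # rounded magnitudes
    return theta, S
```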
Further, step 2 comprises:
Calculating indication guide information of the target scene area relative to each camera's coordinate system, so as to provide guidance to the unmanned aerial vehicle operator; the indication guide information comprises the azimuth angle Y_t and pitch angle P_t of the target scene area in the camera coordinate system;
where the azimuth angle Y_t and pitch angle P_t are expressed respectively as:

Y_t = arctan((Lon_t − Lon_p) · cos(Lat_p), (Lat_t − Lat_p))

P_t = arctan(Alt_t − Alt_p, dist((Lon_t, Lat_t), (Lon_p, Lat_p)))

where arctan(a, b) denotes the two-argument arctangent; (Lon_p, Lat_p, Alt_p) are the geographic coordinates of the unmanned aerial vehicle platform, composed of its longitude Lon_p, latitude Lat_p and altitude Alt_p; (Lon_t, Lat_t, Alt_t) are the geographic coordinates of the occluded target, composed of its longitude Lon_t, latitude Lat_t and altitude Alt_t; and dist is the function that computes the ground distance between two coordinate points.
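A runnable sketch of this guidance computation follows, using the two-argument arctangent exactly as in the formula for Y_t. The patent only names dist as a ground-distance function, so the equirectangular approximation inside ground_dist, the Earth radius and all identifiers are assumptions of ours.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres (assumed)

def ground_dist(lon1, lat1, lon2, lat2):
    """Approximate ground distance between two lon/lat points in degrees."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return EARTH_R * math.hypot(dx, dy)

def guidance(lon_p, lat_p, alt_p, lon_t, lat_t, alt_t):
    """Azimuth Y_t and pitch P_t of the target seen from the platform."""
    # Y_t: two-argument arctangent of the east and north angular offsets.
    y_t = math.atan2((lon_t - lon_p) * math.cos(math.radians(lat_p)),
                     (lat_t - lat_p))
    # P_t: elevation of the target above the platform's local horizon.
    p_t = math.atan2(alt_t - alt_p, ground_dist(lon_p, lat_p, lon_t, lat_t))
    return math.degrees(y_t), math.degrees(p_t)
```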
Further, step 3 comprises:
Performing pixel-level matching of the occluder between the image captured by each camera in the camera array and the image captured by the reference camera; the pixel-level matching relationship between each camera and the reference camera is represented by a pixel mapping matrix, denoted H̃_{m,n} below (its symbol is rendered only as an image in the source), where m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}.
Further, the pixel-level matching process is as follows:
The feature points of the images requiring pixel-level matching are extracted with the SIFT, SURF or ORB algorithm, the feature points of the different images are matched, and the mismatched points are eliminated with the RANSAC algorithm.
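This matching step can be sketched with OpenCV as below. SIFT and RANSAC are named in the text; the ratio-test threshold, the reprojection tolerance, and the simplification of representing the pixel mapping matrix H̃_{m,n} as a single homography estimated from the inlier matches are assumptions of ours.

```python
import cv2
import numpy as np

def pixel_mapping_matrix(img, ref):
    """Estimate the fine pixel mapping from one camera image to the
    reference image via SIFT matching and RANSAC outlier rejection."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img, None)
    kp2, des2 = sift.detectAndCompute(ref, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test discards ambiguous correspondences.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC eliminates the remaining mismatched points, as described above.
    H_fine, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H_fine
```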
Further, step 4 comprises:
Step 41: acquiring images of the target scene area through the camera array;
Step 42: calculating the corrected image corresponding to the target scene area image captured by each camera, based on the homography transformation matrix and the pixel mapping matrix;
Step 43: calculating the offset image corresponding to the target scene area image captured by each camera, from the corrected image and the offset direction and offset amount corresponding to that camera;
Step 44: calculating the image of the occluded target in the target scene area image based on all the offset images.
Further, the corrected image and the target scene area image captured by the corresponding camera satisfy:

I′_{m,n}(x′, y′) = I_{m,n}(x, y)

where the corrected coordinates follow from the combined coarse and fine mappings:

(x′, y′, 1)^T ∝ H̃_{m,n} H_{m,n} (x, y, 1)^T

where I′_{m,n}(x′, y′) is the corrected image, I_{m,n}(x, y) is the target scene area image obtained by the camera in the m-th column and n-th row of the camera array, x and y are the pixel coordinates of the target scene area image, and x′ and y′ are the pixel coordinates of the corrected image;

In step 43:

the offset image is I″_{m,n}(x″, y″), with I″_{m,n}(x″, y″) = I′_{m,n}(x′, y′), where:

x″ = x′ + S_x, y″ = y′ + S_y

where x″ and y″ are the pixel coordinates of the offset image, S_x is the component of the offset amount S_{m,n} along the direction θ_x, and S_y is the component of S_{m,n} along the direction θ_y;

In step 44:

the optical image of the occluded target is:

O(x″, y″) = (1 / (M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} I″_{m,n}(x″, y″)

where O(x″, y″) is the optical image of the occluded target.
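Putting steps 41 to 44 together, a minimal sketch under the reconstructed equations above: each view is warped by the combined coarse and fine mappings, translated by its signed offset, and all views are averaged, so pixels that align across views (the occluded target) reinforce while the misaligned occluder blurs away. Function and variable names are ours.

```python
import cv2
import numpy as np

def occluded_target_image(images, H_coarse, H_fine, offsets):
    """Shift-and-average synthetic aperture image over M*N corrected views.

    images:   list of M*N photoelectric images, all W_r x H_r
    H_coarse: offline homographies H_{m,n}, one per image
    H_fine:   online pixel mapping matrices, one per image
    offsets:  signed per-image pixel offsets (s_x, s_y)
    """
    h, w = images[0].shape[:2]
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, Hc, Hf, (sx, sy) in zip(images, H_coarse, H_fine, offsets):
        corrected = cv2.warpPerspective(img, Hf @ Hc, (w, h))   # steps 41-42
        T = np.float32([[1, 0, sx], [0, 1, sy]])                # step 43
        acc += cv2.warpAffine(corrected, T, (w, h))
    return (acc / len(images)).astype(images[0].dtype)          # step 44
```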
By adopting the above technical scheme, the invention has the following advantages:
(1) Capability of imaging an occluded target. The invention enables the unmanned aerial vehicle cluster cameras to 'see through' the occluder and focus on the target object, a capability that conventional photoelectric cameras do not possess.
(2) Flexible camera position requirements. The invention provides an online calibration method for the unmanned aerial vehicle cluster cameras that acquires image data of the target scene while the cluster is in dynamic operation, dynamically performs image alignment and correction for each camera according to the scene content, and thereby supplies online fine calibration parameters to the computational imaging process for the optically occluded object. The flexibility of the invention has significant advantages over light-field cameras and integral imaging camera arrays.
(3) A simple and accurate computational imaging process. The invention obtains the offset photoelectric images corresponding to the photoelectric images of the target scene area with the unmanned aerial vehicle cluster cameras, and computes the optical image of the occluded target from them.
(4) Adjustable imaging depth range. When the distance between the occluded object and the photoelectric cameras changes, repeating the same computation process images the occluded target of the changed scene. Compared with occluded-object imaging based on integral imaging computation, no initial depth plane needs to be set, and the imaging depth can be set and adjusted as required.
(5) The invention is suitable for application scenarios such as dynamic complex environments, occluded and hidden targets, target search and guidance, and photoelectric target recognition and confirmation.
Drawings
In order to illustrate the technical solution in the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them.
FIG. 1 is a schematic diagram of a scene for the occluded-target optical computational imaging method based on an unmanned aerial vehicle cluster according to the invention;
FIG. 2 is a flow chart of the occluded-target optical computational imaging method based on an unmanned aerial vehicle cluster according to the invention;
FIG. 3 shows the online calibration process of the unmanned aerial vehicle cluster cameras according to the invention;
FIG. 4 shows the computational imaging process for an optically occluded object according to the invention;
FIG. 5 shows the imaging results for an optically occluded object according to an embodiment of the invention.
Reference numerals: 1: unmanned aerial vehicles 1-4; 2: fields of view of photoelectric cameras 1-4; 3: target-occluding scene; 4: occluded target 1; 5: occluded target 2; 6: photoelectric camera 1; 7: photoelectric camera 2; 8: photoelectric camera 3; 9: photoelectric camera 4; 10: photoelectric camera 1 image; 11: photoelectric camera 2 image; 12: photoelectric camera 3 image; 13: reference photoelectric camera image; 14: occluded-target optical computational image.
Detailed Description
The present invention is further described below with reference to the drawings and embodiments. It should be understood that the embodiments described are only some, not all, of the embodiments of the invention. All other embodiments available to a person of ordinary skill in the art fall within the protection scope of the invention.
The method provided by the invention can be explained with a scene composed of an unmanned aerial vehicle cluster, unmanned aerial vehicle photoelectric cameras, an occluding scene and occluded targets, as shown in FIG. 1. The unmanned aerial vehicle cluster uses its mounted photoelectric cameras to optically image the targets in a region of interest; however, because an occluding scene lies between the cluster and those targets, they become occluded targets: no photoelectric camera mounted on a single unmanned aerial vehicle can completely acquire an optical image of an occluded target, so detailed target information cannot be obtained from any one camera. In this scene, the unmanned aerial vehicle cluster is dynamically scattered on one side of the occluding scene, and how to use the imaging data of the scattered cluster's photoelectric cameras to realize optical computational imaging of the occluded targets is the problem the invention solves.
Referring to FIG. 2, an embodiment of the present invention comprises four main processes: offline calibration of the unmanned aerial vehicle cluster cameras, indication-guided imaging of the target scene, online calibration of the unmanned aerial vehicle cluster cameras, and computational imaging of the optically occluded object.
In the offline calibration process, the calibration parameters of the unmanned aerial vehicle cluster camera array are calculated first. The camera array is preliminarily adjusted so that the shooting range of each camera covers the space where the checkerboard calibration board is located, with the array at distance ΔD_b from the board. The cluster has M × N cameras; photographing the checkerboard calibration board yields M × N corresponding calibration parallax images with resolution W_r × H_r, and the pixel coordinates of the checkerboard corner points in these images are detected. The imaging extrinsic and intrinsic parameters of the unmanned aerial vehicle camera in the m-th column and n-th row are obtained by Zhang's calibration method, the extrinsic parameters comprising the camera rotation matrix R_{m,n} and translation vector t_{m,n}, and the intrinsic parameters comprising the intrinsic matrix K_{m,n} composed of the camera's focal length and principal point offset. The rotation matrix in the reference camera's imaging extrinsic parameters is denoted R_0; the reference camera is an arbitrarily designated camera in the cluster's camera array. Based on the homography transformation principle, the homography transformation matrix H_{m,n} corresponding to each camera in the array is calculated as the offline coarse mapping parameter between that unmanned aerial vehicle camera and the reference camera:

H_{m,n} = K_{m,n} R_0 R_{m,n}^{-1} K_{m,n}^{-1}

where m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}. Next, the spatial position relationship between the unmanned aerial vehicle cluster camera array and the occluder and hidden object is determined. The distance ΔD_h between the hidden object in the occluded scene and the camera array is obtained by measurement. The spatial distance between adjacent cameras in each row and column of the array is the same, ΔC, as shown in FIG. 3. Meanwhile, according to the size of the checkerboard calibration board, the extent W_b × H_b of the occluded scene photographed by the array at the plane of the board is calculated. Finally, the offset direction and offset amount corresponding to each camera are calculated from the position of each camera in the array and the spatial relationship between the occluded scene and the hidden object. The offset direction is denoted θ_{m,n} and the offset amount S_{m,n}, and they satisfy respectively:
θ_{m,n} = (θ_x, θ_y)_{m,n}

S_{m,n} = (S_x, S_y)_{m,n}

where θ_x and θ_y are the components of the offset direction θ_{m,n} on the x-axis and y-axis, and S_x and S_y are the components of the offset amount S_{m,n} on the x-axis and y-axis. The four expressions for θ_x, θ_y, S_x and S_y are given only as equation images in the source record and are not reproduced here; each is computed from the camera indices (m, n), the camera spacing ΔC, the target distance ΔD_h, the scene extent W_b × H_b and the resolution W_r × H_r, rounded to the nearest integer by the function round.
The target scene indication-guided imaging process serves to guide all the cameras to the same imaging area as the reference camera. This process calculates indication guide information for the target scene area relative to each photoelectric camera's coordinate system, comprising the azimuth angle Y_t and pitch angle P_t of the target scene area in the camera coordinate system; the indication guide information is quantitative and is presented to the unmanned aerial vehicle operator as visual guidance. The azimuth angle Y_t and pitch angle P_t are obtained from the vector formed by the geographic coordinates of the target scene area and the photoelectric camera, and the slant distance D_t is determined by the modulus of that vector:

Y_t = arctan((Lon_t − Lon_p) · cos(Lat_p), (Lat_t − Lat_p))

P_t = arctan(Alt_t − Alt_p, dist((Lon_t, Lat_t), (Lon_p, Lat_p)))

where (Lon_p, Lat_p, Alt_p) are the geographic coordinates of the unmanned aerial vehicle platform, composed of its longitude Lon_p, latitude Lat_p and altitude Alt_p; (Lon_t, Lat_t, Alt_t) are the geographic coordinates of the indicated target, composed of the longitude Lon_t, latitude Lat_t and altitude Alt_t of the target scene area; and dist(A, B) is the function that computes the ground distance between points A and B.
The online calibration process of the unmanned aerial vehicle cluster cameras means that the cameras acquire image data of the target scene during dynamic operation and that, according to the scene content, image alignment and correction are performed dynamically for each camera, thereby supplying the online fine calibration parameters, composed of the pixel mapping matrices H̃_{m,n}, to the computational imaging process for the optically occluded object. In this process the occluded-target scene occupies most of the pixels of a photoelectric camera image and forms the background of the image scene, while the occluder, as the foreground, occupies few pixels; the online calibration therefore first realizes pixel-level matching of the foreground between images. In the invention, pixel-level matching of the target scene area images acquired by the different unmanned aerial vehicle photoelectric cameras is realized by multi-view image matching: feature points in the images are extracted with the SIFT, SURF or ORB algorithm and mismatched points are eliminated with the RANSAC algorithm, so that the matching relationship between each image and the target scene area image acquired by the reference camera is computed. The matching relationship between each unmanned aerial vehicle camera and the reference camera is represented by the pixel mapping matrix H̃_{m,n}, where m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}.
In the computational imaging process for the optically occluded object, photoelectric images of the target scene area are first obtained with the unmanned aerial vehicle cluster cameras, as shown in FIG. 4; each has resolution W_r × H_r. The photoelectric image of the target scene area captured by the camera in the m-th column and n-th row is I_{m,n}(x, y), where x and y are its pixel coordinates. Using the corresponding homography transformation matrix H_{m,n} and pixel mapping matrix H̃_{m,n}, the corresponding corrected image I′_{m,n}(x′, y′) is calculated, with I′_{m,n}(x′, y′) and I_{m,n}(x, y) satisfying:

I′_{m,n}(x′, y′) = I_{m,n}(x, y)

where the corrected coordinates follow from the combined mappings:

(x′, y′, 1)^T ∝ H̃_{m,n} H_{m,n} (x, y, 1)^T

Then, according to the offset direction θ_{m,n} and offset amount S_{m,n} corresponding to the camera, the corresponding offset photoelectric image I″_{m,n}(x″, y″) is calculated, with I″_{m,n}(x″, y″) and I′_{m,n}(x′, y′) satisfying:

I″_{m,n}(x″, y″) = I′_{m,n}(x′, y′)

where:

x″ = x′ + S_x, y″ = y′ + S_y

where x″ and y″ are the pixel coordinates of the offset image, S_x is the component of the offset amount S_{m,n} along the direction θ_x, and S_y is the component along θ_y.

Preferably, when x″ does not satisfy x″ ∈ {1, 2, 3, …, W_r} or y″ does not satisfy y″ ∈ {1, 2, 3, …, H_r}, the calculation for that pixel coordinate is skipped, to avoid overflowing the pixel coordinate range. Finally, the optical image O(x″, y″) of the occluded target is calculated from the offset photoelectric images I″_{m,n}(x″, y″):

O(x″, y″) = (1 / (M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} I″_{m,n}(x″, y″)

where m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}; the result of optically computing the image of the occluded target is shown in FIG. 5. When the distance ΔD_h between the occluder and the photoelectric cameras changes, repeating the above process computationally images the occluded target of the changed scene.
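As the paragraph above notes, changing ΔD_h only changes the per-camera offsets, so the same captured images can be refocused at several candidate depths. Below is a sketch using the earlier hypothetical helpers camera_offsets() and occluded_target_image(); the loop order must match the order of the captured views.

```python
def depth_sweep(images, H_coarse, H_fine, depths,
                M, N, m0, n0, dC, dD_b, W_r, H_r, W_b, H_b):
    """Refocus the same M*N captured views at each candidate depth dD_h."""
    results = {}
    for dD_h in depths:
        offs = []
        for m in range(1, M + 1):        # images must be ordered by (m, n)
            for n in range(1, N + 1):
                theta, S = camera_offsets(m, n, m0, n0, dC, dD_b, dD_h,
                                          W_r, H_r, W_b, H_b)
                offs.append((theta[0] * S[0], theta[1] * S[1]))  # signed shift
        results[dD_h] = occluded_target_image(images, H_coarse, H_fine, offs)
    return results
```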
The foregoing shows and describes the basic principles and main features of the present invention and its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection is defined by the appended claims and their equivalents.

Claims (10)

1. A method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster, characterized by comprising the following steps:
Step 1: obtaining the offset direction and offset amount corresponding to each camera in a camera array based on the calibration parameters of the camera array of the unmanned aerial vehicle cluster, where each unmanned aerial vehicle in the cluster carries one camera and all the cameras carried by the cluster form the camera array;
Step 2: guiding all the cameras to the same imaging area as a reference camera based on the offset direction and offset amount corresponding to each camera, the reference camera being an arbitrarily designated camera in the camera array;
Step 3: acquiring online fine calibration parameters for each camera in the camera array during motion, the online fine calibration parameters being formed by pixel mapping matrices;
Step 4: obtaining an image of the occluded target based on the calibration parameters and the online fine calibration parameters.
2. The method of claim 1, wherein step 1 comprises:
Step 11: calculating the calibration parameters of the camera array, the calibration parameters comprising imaging extrinsic parameters, imaging intrinsic parameters and offline coarse mapping parameters;
Step 12: determining the spatial position relationship between the camera array and the occluded target;
Step 13: calculating the offset direction and offset amount corresponding to each camera based on the spatial position relationship.
3. The method according to claim 2, wherein step 11 comprises:
the camera array has M × N cameras; the camera array photographs a checkerboard calibration board to obtain M × N corresponding calibration parallax images with resolution W_r × H_r, and the pixel coordinates of the checkerboard corner points in the calibration parallax images are detected;
based on the corner coordinates in each calibration board image, the imaging extrinsic parameters and imaging intrinsic parameters of the camera in the m-th column and n-th row of the camera array are obtained by Zhang's calibration method, the imaging extrinsic parameters comprising the camera's rotation matrix R_{m,n} and translation vector t_{m,n}, and the imaging intrinsic parameters comprising the intrinsic matrix K_{m,n} composed of the camera's focal length and principal point offset;
the homography transformation matrix corresponding to each camera in the camera array is calculated based on the homography transformation principle and taken as the offline coarse mapping parameter between that camera and the reference camera, the homography transformation matrix being:

H_{m,n} = K_{m,n} R_0 R_{m,n}^{-1} K_{m,n}^{-1}

where H_{m,n} is the homography transformation matrix, R_0 is the rotation matrix in the imaging extrinsic parameters of the reference camera, and m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}.
4. The method of claim 3, wherein step 12 comprises:
obtaining by measurement the distance ΔD_h between the occluded target in the target scene area and the camera array, the spatial distance between adjacent cameras in the camera array being the same everywhere and equal to ΔC, and the target scene area comprising the occluder and the occluded target;
calculating, according to the size of the checkerboard calibration board, the extent W_b × H_b of the occluded scene photographed by the camera array at the plane of the calibration board.
5. The method of claim 4, wherein the offset direction is denoted θ_{m,n} and the offset amount is denoted S_{m,n}, and they satisfy respectively:

θ_{m,n} = (θ_x, θ_y)_{m,n}

S_{m,n} = (S_x, S_y)_{m,n}

where θ_x and θ_y are the components of the offset direction θ_{m,n} on the x-axis and y-axis, and S_x and S_y are the components of the offset amount S_{m,n} on the x-axis and y-axis;

θ_x, θ_y, S_x and S_y each satisfy an expression given only as an equation image in the source record (not reproduced here), computed from the camera indices (m, n), the camera spacing ΔC, the target distance ΔD_h, the scene extent W_b × H_b and the resolution W_r × H_r, where round is the function that rounds a value to the nearest integer.
6. The method of claim 5, wherein step 2 comprises:
calculating indication guide information of the target scene area relative to each camera's coordinate system, so as to provide guidance to the unmanned aerial vehicle operator, the indication guide information comprising the azimuth angle Y_t and pitch angle P_t of the target scene area in the camera coordinate system;
where the azimuth angle Y_t and pitch angle P_t are expressed respectively as:

Y_t = arctan((Lon_t − Lon_p) · cos(Lat_p), (Lat_t − Lat_p))

P_t = arctan(Alt_t − Alt_p, dist((Lon_t, Lat_t), (Lon_p, Lat_p)))

where arctan(a, b) denotes the two-argument arctangent; (Lon_p, Lat_p, Alt_p) are the geographic coordinates of the unmanned aerial vehicle platform, composed of its longitude Lon_p, latitude Lat_p and altitude Alt_p; (Lon_t, Lat_t, Alt_t) are the geographic coordinates of the occluded target, composed of its longitude Lon_t, latitude Lat_t and altitude Alt_t; and dist is the function that computes the ground distance between two coordinate points.
7. The method of claim 6, wherein step 3 comprises:
performing pixel-level matching of the occluder between the image captured by each camera in the camera array and the image captured by the reference camera, the pixel-level matching relationship between each camera and the reference camera being represented by the pixel mapping matrix H̃_{m,n}, where m and n are the column and row indices of the camera array, m ∈ {1, 2, 3, …, M}, n ∈ {1, 2, 3, …, N}.
8. The method of claim 7, wherein the pixel-level matching is performed by:
extracting the feature points of the images requiring pixel-level matching with the SIFT, SURF or ORB algorithm, matching the feature points of the different images, and removing the mismatched points with the RANSAC algorithm.
9. The method of claim 7, wherein step 4 comprises:
Step 41: acquiring images of the target scene area through the camera array;
Step 42: calculating the corrected image corresponding to the target scene area image captured by each camera, based on the homography transformation matrix and the pixel mapping matrix;
Step 43: calculating the offset image corresponding to the target scene area image captured by each camera, from the corrected image and the offset direction and offset amount corresponding to that camera;
Step 44: calculating the image of the occluded target in the target scene area image based on all the offset images.
10. The method according to claim 9, wherein the corrected image and the target scene area image captured by the corresponding camera satisfy:

I′_{m,n}(x′, y′) = I_{m,n}(x, y)

where the corrected coordinates follow from the combined coarse and fine mappings:

(x′, y′, 1)^T ∝ H̃_{m,n} H_{m,n} (x, y, 1)^T

where I′_{m,n}(x′, y′) is the corrected image, I_{m,n}(x, y) is the target scene area image obtained by the camera in the m-th column and n-th row of the camera array, x and y are the pixel coordinates of the target scene area image, and x′ and y′ are the pixel coordinates of the corrected image;

in step 43:

the offset image is I″_{m,n}(x″, y″), with I″_{m,n}(x″, y″) = I′_{m,n}(x′, y′), where:

x″ = x′ + S_x, y″ = y′ + S_y

where x″ and y″ are the pixel coordinates of the offset image, S_x is the component of the offset amount S_{m,n} along the direction θ_x, and S_y is the component of S_{m,n} along the direction θ_y;

in step 44:

the optical image of the occluded target is:

O(x″, y″) = (1 / (M·N)) Σ_{m=1}^{M} Σ_{n=1}^{N} I″_{m,n}(x″, y″)

where O(x″, y″) is the optical image of the occluded target.
CN202211398545.3A 2022-11-09 2022-11-09 Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster Pending CN115760979A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211398545.3A | 2022-11-09 | 2022-11-09 | Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211398545.3A | 2022-11-09 | 2022-11-09 | Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster

Publications (1)

Publication Number | Publication Date
CN115760979A (en) | 2023-03-07

Family

ID=85368542

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211398545.3A | Method for realizing occluded-target optical computational imaging based on an unmanned aerial vehicle cluster | 2022-11-09 | 2022-11-09

Country Status (1)

Country | Link
CN (1) | CN115760979A (en)


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination