CN115914815A - Airborne panoramic all-round looking device and method

Publication number: CN115914815A
Application number: CN202211406240.2A
Authority: CN (China)
Legal status: Pending
Inventors: 朱飞要, 马翼平
Applicant/Assignee: Avic East China Photoelectric Shanghai Co ltd
Original language: Chinese (zh)
Prior art keywords: camera, image, module, sensor group, panoramic

Abstract

The invention relates to an onboard panoramic look-around device and method. The device comprises a switch control group, a laser radar sensor group, a camera sensor group, and a panoramic view processing and display platform. The switch control group is used to turn the device on and off, switch application scenes and switch display pictures; the laser radar sensor group acquires distance and orientation data between obstacles and the target aircraft; the camera sensor group acquires image data of the surroundings and the bottom environment of the target aircraft in real time; and the panoramic view processing and display platform automatically adjusts the orientation of the sensor groups, adds auxiliary lines according to the distance and orientation data acquired by the laser radar sensor group together with intrinsic parameter data of the panoramic view (such as the aircraft shell), detects obstacles in real time and marks them in the panoramic view, and finally displays the comprehensive panoramic view on a display.

Description

Airborne panoramic all-round looking device and method
Technical Field
The invention relates to the technical field of airborne graphic image processing, in particular to an airborne panoramic looking-around device and method.
Background
Existing monitoring methods for large aircraft mostly rely on a single camera monitoring a single viewing angle, which requires the monitoring personnel to judge the monitored area themselves and offers poor continuity. The viewing angle of a single camera is limited, so wide-angle environment monitoring cannot be achieved; monitoring a larger area requires configuring more cameras, and the resulting monitoring benefit is poor value for the expenditure.
Because of the special structure of an aircraft, during the takeoff phase, and particularly while leaving the hangar and taxiing on the runway, the pilot cannot directly observe the environment around the aircraft in all directions; blind areas exist and the pilot's freedom of movement is limited, so emergencies may not be perceived in time and risks may not be responded to. This is one of the main causes of collision accidents, which is why aircraft travelling on the ground usually require several ground crew to direct them in coordination. In addition, when a flight task is carried out in extreme weather, dispatching ground crew outdoors for on-site direction cannot guarantee their personal safety. It is therefore necessary to equip the aircraft with a complete panoramic look-around device to reduce manpower and improve the safety of monitoring the surrounding environment.
Existing panoramic look-around devices are mostly applied to driving tools with miniaturized structures (such as automobiles and small unmanned aerial vehicles). They fall into two types. The first is the distributed panoramic look-around device, whose camera layout is relatively simple: cameras mounted at the front, rear, left and right of the vehicle are used for image stitching to obtain a bird's-eye panoramic image of the target vehicle. A large aircraft, however, is structurally more complicated, with many curves and protrusions in its outline and a very large size; compared with small vehicles such as cars it requires much wider camera coverage and a more flexible camera layout, and the image stitching is not a simple horizontal and vertical merge but requires multi-angle image transformation. The second type is the integrated panoramic look-around device, in which multiple cameras are tightly combined into a 360-degree ring or sphere mounted on top of, or hung beneath, the target vehicle to obtain a bird's-eye panoramic image of the surrounding environment. This composition is mainly used to capture the external environment and pays no attention to the target vehicle itself; it is not suitable for a large aircraft because, besides insufficient camera coverage, distant parts of the acquired image are blurry, which is unfavorable for the pilot's observation of distant scenes.
In addition to the above shortcomings of existing panoramic look-around devices, installing a panoramic look-around device on a large aircraft has the following notable requirements: besides observing the environment around the target vehicle, the pilot also needs to observe the environment of the ground area covered by the aircraft's body, and needs to check for obstacles in that bottom environment when starting to taxi. Most large aircraft take off with an assisted runway run, unlike the horizontal driving of a car or the vertical takeoff of a small aircraft such as an unmanned aerial vehicle, so a large aircraft also needs to observe its surroundings during the initial stage of lifting off.
In many existing commercial panoramic look-around devices, several camera sensors mounted on the vehicle collect image data of the road surface around the target vehicle, and the pictures of the cameras are combined into a panoramic view. During calibration of the look-around cameras, the images of the overlapping area of adjacent cameras are stitched by directly superimposing the common elements of the individual images to provide the required view, which leads to blind areas and image distortion at the seams. Moreover, because the cameras are installed at different positions and affected by different environments, the brightness and color of the images restored from different cameras differ, so the stitched image gives a poor viewing experience; the brightness differences between the images of different cameras become even more obvious while the target vehicle is moving.
Chinese patent application CN111145362A discloses a virtual-real fusion display method and system for an airborne integrated vision system. Under low-visibility conditions it can effectively improve the pilot's accurate perception of the spatial position and form of the airport runway and of obstacles, improve situational awareness, reduce typical approach-and-landing accidents such as controlled flight into terrain and runway incursion, and improve aircraft safety.
Chinese patent application CN211280826U discloses a vehicle-mounted radar and panoramic image enhancement system, comprising a radar and panoramic image system, a vehicle-mounted host system, a vehicle-mounted display screen and sound system, radar probes, and the enhancement system itself, each connected to the vehicle-mounted host system. The enhancement system captures and analyses the CAN signal of B1 to judge the vehicle's running speed and the distance of the obstacle detected by the radar probe; when these two conditions simultaneously satisfy the MCU software settings, the enhancement system sends a wake-up signal to the radar and panoramic image system through the LIN bus, that system sends the relevant information to the host, and the display screen shows it.
To address the defects of the prior art, the invention combines the pictures of fisheye, black-and-white, infrared, depth and long-focus cameras through multi-camera technology: two color cameras are combined to calculate depth of field and realize background blurring and refocusing; a color camera is combined with a black-and-white camera to improve image quality in dim-light and night scenes; a wide-angle lens is combined with a telephoto lens for optical zooming; and a color camera is combined with a depth camera for three-dimensional reconstruction. This yields clearer picture quality, better adaptability to low-light environments, a more prominent three-dimensional rendering of the environment and a longer viewing distance, giving a better visual experience in different scenes. In addition, existing panoramic look-around devices on large aircraft use only cameras as image acquisition equipment, so the pilot's perception of distance is poor; the invention therefore also integrates a laser radar ranging sensor and adds auxiliary lines to the panoramic look-around view, so that the pilot can control safe distances and grasp the environment around the target aircraft during the ground-driving stage and the initial takeoff stage. No airborne panoramic look-around device and method of this kind has been reported so far.
Disclosure of Invention
The invention aims to provide an airborne panoramic all-round looking device and method aiming at the defects of the prior art.
In order to achieve the purpose, the invention adopts the technical scheme that:
In a first aspect, the invention provides an onboard panoramic look-around device, which comprises a switch control group, a laser radar sensor group, a camera sensor group and a panoramic view processing and display platform. The switch control group comprises a power controller, an application scene controller and a picture display control module, and is used for turning the device on and off, switching application scenes and switching display pictures; the laser radar sensor group is used for acquiring distance and azimuth data between obstacles and the target aircraft; the camera sensor group comprises a first camera module and a second camera module and is used for collecting image data of the surroundings and the bottom of the target aircraft in real time; the panoramic view processing and display platform comprises an image display selection module, a sensor group orientation adjusting module, a distance and azimuth data acquisition module, an image data acquisition module, an equipment synchronization module, an image processing module, a parameter memory, a central arithmetic unit and a display, and is used for automatically adjusting the orientation of the sensor groups.
Further, the application scene controller comprises a bright scene button, a dim-light scene button and a no-light scene button. The bright scene button controls the first camera module in the camera sensor group to start its fisheye camera, depth camera and long-focus camera, and the second camera module to start its fisheye camera, black-and-white camera, depth camera and long-focus camera; the dim-light scene button controls the first camera module to start its fisheye camera, black-and-white camera, depth camera and long-focus camera, and the second camera module to start its fisheye camera, black-and-white camera, depth camera and long-focus camera; the no-light scene button controls the first camera module to start its fisheye camera, infrared camera and long-focus camera, and the second camera module to start its fisheye camera, infrared camera and long-focus camera.
Further, the screen display control module divides the screen of the touch screen display into a primary interface and a secondary interface, the primary interface displays the panoramic aerial view and the bottom azimuth view of the target aircraft, and the secondary interface displays the panoramic views of the front, the rear, the left, the right and the bottom azimuth of the target aircraft.
Furthermore, the laser radar sensor group is located at the nose, the front edge of the wing and the middle bottom part of the fuselage on the outer edge of the aircraft, and is used for acquiring the distance and the direction of the obstacle and the distance between the target aircraft and the ground.
Further, the image display selection module is used for acquiring signal input of the image display control module in the switch control group, matching the laser radar sensor and the camera sensor equipment number in the corresponding area according to the image display control signal, binding corresponding internal and external parameters of the camera sensor, conversion matrix parameters of the laser radar sensor and orientation arrangement parameters of the sensor group, and transmitting the parameters to the distance and orientation data acquisition module and the image data acquisition module.
Further, the sensor group orientation adjusting module is used for acquiring the switch signal of the switch control group and the rotation and telescopic parameters in the parameter memory, and for controlling the position and orientation of the sensor groups. When an opening signal from the switch control group is received, the sensor group shielding cover is opened, the sensor group is extended out of its groove to the specified position according to the telescopic parameter, and the vertical angle of the sensor group is adjusted according to the rotation parameter; when a closing signal from the switch control group is received, the sensor group is retracted into the groove and the groove is sealed with the shielding cover.
Further, the distance and orientation data acquisition module and the image data acquisition module respectively open corresponding laser radar sensor devices in the laser radar sensor group and corresponding camera sensor devices in the camera sensor group through the device numbers input by the image display selection module, and respectively transmit the acquired distance and orientation data and image data and corresponding device parameters to the device synchronization module.
Further, the equipment synchronization module is configured to synchronize the thread with which the laser radar sensor group acquires distance and orientation data and the threads with which the different camera sensors in the camera sensor group acquire images, and to output them to the image processing module.
Further, the image processing module is used for processing the images acquired by the camera sensors, fusing the distance and orientation data acquired by the laser radar sensors with the image data, synthesizing the panoramic image, adding auxiliary lines and marking obstacle early-warning areas. The parameter memory is used for storing the internal and external parameters of the cameras; the equipment number and position number of each azimuth sensor; the telescopic and rotation parameters of each sensor group; the conversion matrix parameters with which data points acquired by a laser radar sensor are converted from the world coordinate system to the image pixel coordinate system; the bottom-overlapped projection matrix parameters of the multi-type cameras of each azimuth; the height of the target aircraft above the ground before lift-off; the safe flying height of the target aircraft; the obstacle identification error distance; the height error distance for leaving the ground; and the pixel matrix parameters of the vertical depth auxiliary lines, horizontal distance auxiliary lines and aircraft contour auxiliary lines in the preset image pixel coordinate system. The central arithmetic unit is used for carrying out the computations of the image processing module, and the display is used for displaying the processed comprehensive panoramic image view.
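For readability, the kinds of parameters held by the parameter memory can be sketched as a simple data structure. Every field name below is an assumption introduced for illustration; the text above only enumerates the categories of stored parameters.

```python
# Illustrative layout of the parameter memory; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ParameterMemory:
    camera_intrinsics: dict = field(default_factory=dict)     # device no. -> K, distortion
    camera_extrinsics: dict = field(default_factory=dict)     # device no. -> R, t
    device_positions: dict = field(default_factory=dict)      # device no. -> position no.
    telescopic_params: dict = field(default_factory=dict)     # sensor group -> extension
    rotation_params: dict = field(default_factory=dict)       # sensor group -> tilt angle
    lidar_world_to_pixel: dict = field(default_factory=dict)  # lidar no. -> conversion matrix
    bottom_overlap_projections: dict = field(default_factory=dict)  # camera no. -> homography
    ground_height_m: float = 0.0          # aircraft height above ground before lift-off
    safe_flight_height_m: float = 0.0
    obstacle_error_margin_m: float = 0.0
    liftoff_height_error_m: float = 0.0
    auxiliary_line_pixels: dict = field(default_factory=dict)  # view name -> pixel matrix
```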
In a second aspect, the present invention provides a method for panoramic looking around by using the apparatus as described in the first aspect, comprising the following steps:
(1) Arranging a sensor group, acquiring, storing and calculating parameters:
the laser radar sensor group is divided into two types, one type is a sensor group formed by 3 laser radar sensors and a camera sensor, and the other type is a sensor group formed by a single laser radar sensor.
The camera sensor groups are formed by combining fisheye, infrared, black-and-white, depth and long-focus cameras. According to how the camera sensors are combined, the camera sensor groups fall into two types: one consists of the camera sensors required by the first camera module and the other of the camera sensors required by the second camera module. According to whether they are used together with laser radar sensors, the camera sensor groups can also be divided into two types: one is a sensor group combined with 3 laser radar sensors (the first laser radar category above), and the other is a sensor group consisting of camera sensors only.
When the following sensor groups are arranged, the description will be mainly given according to 3 types of sensor groups, i.e., a sensor group 1 in which a laser radar sensor and a camera sensor are used in combination, a sensor group 2 including a single laser radar sensor, and a sensor group 3 including a pure camera sensor.
When the sensor groups are arranged, sensor group 1 is installed at fixed positions at the nose, the leading edge of the wings and the middle bottom of the fuselage of the target aircraft; for aircraft with longer wings, several groups are arranged at intervals of at least 2 m and at most 4 m, and are used for acquiring peripheral image data and obstacle distance data. Sensor group 2 is installed at the middle bottom of the fuselage, oriented perpendicular to the ground, and is used for recording the change in height between the aircraft and the ground. Sensor group 3 is installed 10 cm inside the outermost edge of the aircraft, oriented inwards, and at the bottom of the aircraft, oriented vertically downwards, and is used for acquiring image data of the area beneath the aircraft.
In addition, when the engine must be kept out of the picture obtained by the camera sensors in a sensor group, or a single camera sensor group cannot obtain comprehensive image data for a longer part of the airframe, several camera sensor groups can be arranged, preferably at 2 m intervals, so that the images obtained by all the camera sensor groups cover the whole close-range environment around the aircraft and the area covered by the aircraft's bottom.
After the sensor groups are arranged, grid calibration boards are laid around the target aircraft and a conventional calibration method is used to obtain the images of every camera in every camera sensor group; the threads that acquire sensor data are synchronized with a thread lock and a sensor-number counter. For each camera, the internal and external parameters are calculated by mapping the coordinate points of the image coordinate system acquired by that camera to the coordinate points of the real-world coordinate system. For each laser radar sensor, the measured attitude and position information and the coordinate data of the calibration board laid in the corresponding direction are used to calculate the conversion matrix with which the real-world coordinate points acquired by that sensor are mapped to the display image coordinate system. Finally, the coordinate points of the several images acquired by all the camera sensors of one camera sensor group are mapped onto the same coordinate system, and the bottom-overlapped projection matrix that maps each camera's image onto that common image pixel coordinate system is calculated.
According to the directly measured position data of the corresponding key coordinate points in the real-world coordinate system, the mapping between the real-world coordinate system and the coordinate system of the display image is obtained, and from it the mapping parameters of the vertical depth auxiliary lines, horizontal distance auxiliary lines and aircraft contour auxiliary lines on the display image pixel coordinate system are calculated for the 6 views: panorama, front, rear, left, right and bottom.
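The projection of pre-measured auxiliary-line points from the world coordinate system onto display pixels can be illustrated as follows. This is a minimal sketch assuming a precomputed ground-plane-to-display homography H; the function name and drawing style are illustrative, not the patent's implementation.

```python
# Illustrative sketch only: projecting ground-plane auxiliary-line points into the
# display image with an assumed homography H (world X,Y on the ground -> display pixels).
import numpy as np
import cv2

def draw_horizontal_distance_circles(view, H, radii_m, center_world=(0.0, 0.0)):
    """Draw concentric horizontal-distance circles (radii in meters) on the view."""
    cx, cy = center_world
    for r in radii_m:
        # Sample the circle in world coordinates on the ground plane.
        theta = np.linspace(0.0, 2.0 * np.pi, 180, dtype=np.float32)
        world_pts = np.stack([cx + r * np.cos(theta), cy + r * np.sin(theta)], axis=1)
        # Map world (X, Y) points to display pixels with the homography.
        pixel_pts = cv2.perspectiveTransform(world_pts.reshape(-1, 1, 2), H)
        cv2.polylines(view, [np.int32(pixel_pts)], isClosed=True,
                      color=(0, 255, 0), thickness=2)
    return view
```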
(2) Selecting the application scene (bright light/weak light/no light) and turning on the corresponding sensor device.
(3) Fusing same-orientation multi-type camera images:
for the same camera sensor group, images acquired by the camera sensors are converted into aerial views at single visual angles by utilizing internal and external parameters of each camera sensor, and then a plurality of aerial views in the fisheye camera aerial view, the infrared camera aerial view, the black-white camera aerial view, the depth camera aerial view and the long-focus camera aerial view under the same camera group are overlapped into one aerial view by utilizing the bottom-folded projection matrix according to scene requirements.
The fusion form of the aerial views of the various cameras according to different application scenes mainly comprises the following 3 types:
form 1: fish eye + long focus + depth.
Form 2: fish eyes + long coke + black and white + depth.
Form 3: fish eye + long focus + infrared.
The fisheye camera image and the long-focus camera image are fused as follows: translation-invariant discrete wavelet transform is used for multi-layer filtering to form the high-frequency and low-frequency sub-images of the two images; the high-frequency sub-images of the two images are fused to form a high-frequency component fusion coefficient; the low-frequency sub-images are fused to form a low-frequency component fusion coefficient; and an inverse translation-invariant discrete wavelet transform is applied to the high-frequency and low-frequency components corresponding to these coefficients to generate the fused image, thereby achieving an optical-zoom effect.
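A hedged sketch of such shift-invariant wavelet fusion for two registered grayscale images is shown below, assuming the PyWavelets library; the coefficient rules used here (average for the low-frequency band, maximum absolute value for the high-frequency bands) are common choices and are assumptions, since the exact fusion rules are not spelled out above.

```python
# Illustrative stationary-wavelet fusion of two registered, equally sized grayscale
# images; image side lengths must be divisible by 2**level for pywt.swt2.
import numpy as np
import pywt

def swt_fuse(img_a, img_b, wavelet="db2", level=1):
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    coeffs_a = pywt.swt2(a, wavelet, level=level)
    coeffs_b = pywt.swt2(b, wavelet, level=level)
    fused = []
    for (ca, (ch_a, cv_a, cd_a)), (cb, (ch_b, cv_b, cd_b)) in zip(coeffs_a, coeffs_b):
        low = (ca + cb) / 2.0                             # low-frequency band: average
        high = [np.where(np.abs(x) >= np.abs(y), x, y)    # high-frequency bands: max-abs
                for x, y in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b))]
        fused.append((low, tuple(high)))
    return pywt.iswt2(fused, wavelet)
```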
The fisheye camera image and the depth camera image are fused as follows: the depth regions of the depth camera image are obtained from the values of adjacent pixels, the position indices of the region contour pixels are obtained by a pixel-value approximation method, the ratio of each contour pixel's value to the maximum index value is taken as its weight, and the weight of pixels outside the contours is set to 1, giving a new pixel weight matrix. Multiplying this weight matrix element-wise with the pixel matrix of the original image yields the fused image of the original image and the depth camera image; compared with the original image, the edges of objects are sharper and the stereoscopic impression is more prominent.
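The following sketch illustrates one way to realize this depth-guided edge emphasis. The exact weight definition above is only paraphrased, so the contour weight used here (normalized depth value on the contour, 1 elsewhere) should be read as an assumption.

```python
# Illustrative sketch: emphasize object edges in the color image using contours
# extracted from the registered depth image.
import numpy as np
import cv2

def fuse_with_depth(color_img, depth_img):
    depth = cv2.normalize(depth_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Segment depth regions and extract their contours.
    _, regions = cv2.threshold(depth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(regions, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Weight matrix: 1 everywhere, a depth-dependent value on the contour pixels.
    weights = np.ones(depth.shape, np.float32)
    contour_mask = np.zeros(depth.shape, np.uint8)
    cv2.drawContours(contour_mask, contours, -1, 255, thickness=2)
    on_edge = contour_mask > 0
    weights[on_edge] = depth[on_edge].astype(np.float32) / 255.0
    fused = color_img.astype(np.float32) * weights[..., None]
    return np.clip(fused, 0, 255).astype(np.uint8)
```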
The fisheye camera image and the black-and-white camera image are fused as follows: the luminance and chrominance components of the fisheye camera image are obtained first; the luminance component is fused with the black-and-white image by averaging the luminance pixel values with the black-and-white pixel values at corresponding points, giving a fused luminance component; the fused luminance component is then recombined with the chrominance component to obtain the target image. Compared with the original image, this clearly improves image brightness in dark scenes and reduces noise.
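A minimal sketch of this luminance-channel fusion is given below, assuming a registered single-channel black-and-white image of the same size and using the YCrCb representation for the luminance/chrominance split.

```python
# Illustrative luminance fusion of a color image with a registered monochrome image.
import numpy as np
import cv2

def fuse_luma_with_mono(color_img, mono_img):
    ycrcb = cv2.cvtColor(color_img, cv2.COLOR_BGR2YCrCb)
    luma, cr, cb = cv2.split(ycrcb)
    # Average the color image's luminance with the monochrome pixel values.
    fused_luma = ((luma.astype(np.float32) + mono_img.astype(np.float32)) / 2.0)
    fused_luma = fused_luma.astype(np.uint8)
    return cv2.cvtColor(cv2.merge([fused_luma, cr, cb]), cv2.COLOR_YCrCb2BGR)
```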
The fisheye camera image and the infrared camera are fused as follows: in a scene without light, the infrared fill light is turned on and the infrared image is acquired.
(4) Synthesizing a panoramic view:
and (3) arranging the fused aerial view acquired in the step (2) through image position information, fusing the overlapped area, and then carrying out image brightness balance processing and color balance processing on the image.
The overlapping areas are fused by assigning each pixel a weight coefficient that varies with the pixel value of that point and changes continuously with the coordinate distance from the pixel to the boundary of the overlapping area. The weight coefficients are obtained as follows: the overlapping area of the projected images is extracted, grayed and binarized; noise points of the binarized image are removed by a morphological method; the outer boundary of the overlapping area is then determined from the values of adjacent pixels, and the polygonal contour coordinates are obtained with an approximation method; the distance from each pixel to the outer boundary of the overlapping area is then obtained from the coordinates of the target pixel and the nearest contour coordinates of the overlapping area. The weight of each pixel in the overlapping area relative to the boundary is then calculated, the weights of pixels outside the overlapping area are set, and a continuously varying weight matrix is obtained; multiplying the image pixel values of the overlapping area by this weight matrix gives the new pixel matrix, i.e. the pixel matrix of the fused image in the overlapping area, thereby obtaining the image of the overlapping area.
For the brightness balance of the multiple image groups, a coefficient is calculated for each of the RGB 3 channels of all the processed camera images in the different orientations and multiplied by the original channel values: the coefficient for an over-bright channel is smaller than 1, dimming it, and the coefficient for an over-dark channel is larger than 1, brightening it, finally forming the brightness-adjusted image.
For the color balance of the multiple image groups, the panoramic image after brightness balance is used: the average of the combined RGB three-channel values of the pixel matrix is obtained, and the value of each channel of every pixel is multiplied by the ratio of this overall average to that channel's average, giving the pixel matrix of the color-balanced panoramic image.
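The two balance steps can be sketched as follows. The brightness coefficients used here are simply the ratio of the global mean to each channel mean, which dims over-bright channels (coefficient < 1) and brightens over-dark ones (coefficient > 1); the exact coefficient formulas are not reproduced above, so this is an assumed, simplified variant.

```python
# Illustrative brightness and color balance; assumes 8-bit RGB/BGR images.
import numpy as np

def balance_brightness(images):
    """Scale every channel of every source image toward the global mean."""
    global_mean = np.mean([img.mean() for img in images])
    balanced = []
    for img in images:
        out = img.astype(np.float32)
        for c in range(3):
            out[..., c] *= global_mean / max(out[..., c].mean(), 1e-6)
        balanced.append(np.clip(out, 0, 255).astype(np.uint8))
    return balanced

def gray_world_color_balance(panorama):
    """New channel value = old value * (overall mean / channel mean)."""
    img = panorama.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    overall_mean = channel_means.mean()
    img *= overall_mean / np.maximum(channel_means, 1e-6)
    return np.clip(img, 0, 255).astype(np.uint8)
```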
(5) Auxiliary line adding:
and adding horizontal distance auxiliary lines, vertical depth distance auxiliary lines and aircraft contour auxiliary lines by using the auxiliary line adding module, wherein the horizontal distance auxiliary lines are represented by a plurality of groups of circles, the circles are annular, the distance between the circles is gradually reduced from outside to inside, the sidelines of each group of circles are horizontal distance lines, the distance between the circles is a reference for the actual horizontal distance, and the distance is calculated by length projection under a world coordinate system. The vertical depth auxiliary line is represented by a trapezoid and is divided into a front part and a rear part of the target aircraft, and a solid line is drawn at the intersection of the trapezoid and the circle, so that a stereoscopic visual effect is achieved. Wherein the vertical auxiliary lines are only added on the bird's-eye view panorama, and the horizontal auxiliary lines are respectively added on the panorama, the front, the rear, the left and the right visual angle images. The aircraft contour auxiliary line is a projection of the whole aircraft outer edge contour on an image pixel coordinate system and is used for distinguishing an aircraft external panoramic all-around view area and an aircraft internal panoramic all-around view area.
(6) Obstacle detection and early warning:
and if the distance is less than the obstacle identification error distance recorded in the parameter memory when no obstacle exists, the obstacle is marked in the direction, and a display area of the direction on the display is flickered. Obstacles at 50 meters, 100 meters and 200 meters are displayed by red, orange and yellow regular triangle images on the panoramic view image and the corresponding azimuth view angle image respectively. When the target aircraft drives away from the ground and does not reach the preset safe flying height, namely the distance value acquired by the laser radar sensor at the bottom of the aircraft is larger than the preset aircraft ground driving height value, and when the distance value is smaller than the preset safe flying height value, the obstacle boundaries of 50 meters, 100 meters and 200 meters do not use the preset aircraft ground driving height value any more, but use the bottom laser radar height ranging value to calculate. And if the signal that the target aircraft landing gear is retracted is detected, closing the obstacle early warning module.
(7) Displaying the comprehensive panoramic image:
the processed panoramic image is transmitted to a display for displaying at 20-30 frames per second, and a smooth panoramic circular view effect can be achieved. The user can exchange different orientation images displayed on the main interface and the secondary interface of the display in a touch control mode. The switch of the camera and the display interface is controlled by the retraction signal of the undercarriage and the main power switch signal, when the undercarriage of the aircraft is retracted, the camera sensor is closed, and the panoramic all-round looking system is closed; when the aircraft landing gear is unfolded, the camera sensor is turned on, and the panoramic all-around viewing system is started.
The invention has the advantages that:
1. the device can be provided with a camera group formed by combining a plurality of groups of fisheye cameras, black-and-white cameras, infrared cameras, depth cameras and long-focus cameras, and obtains a panoramic all-around view with higher quality on the premise that the camera group can cover a comprehensive monitoring area. Different camera combinations are applied under different scenes, so that a clearer local or global surrounding environment all-round visual field with higher identification degree is obtained.
2. The invention also incorporates a ranging device such as a laser radar. Using the information acquired by the various sensors, it adds several types of auxiliary lines (vertical depth auxiliary lines, horizontal distance auxiliary lines and aircraft contour auxiliary lines); the distance and orientation of obstacles obtained by laser radar ranging are drawn directly into the panoramic image and shown on the display, so the position of an obstacle and the safe distance to it can be judged more intuitively.
3. The image display is switchable, so observation of a designated range is more flexible and detailed. When the image of a designated orientation is shown on the main interface, its resolution is increased and the picture is finer, while the resolution of the images on the secondary interfaces is reduced and mainly the obstacle detection module keeps running; this effectively reduces energy consumption and the load on the arithmetic unit, and reduces image transmission delay.
In addition, the number of the sensor groups can be flexibly matched, if only a panoramic image of a specified direction needs to be checked, only the sensor groups of the corresponding direction can be assembled, and the panoramic image is only displayed in the main interface, so that the basic requirements are met and the cost is reduced.
4. By using the method and the device, the vertical depth perception and the horizontal distance perception can be further obtained while accurately mastering the surrounding environment conditions of the aircraft in the ground driving stage and the initial takeoff stage, the ground environment conditions of the peripheral and bottom areas of the aircraft can be known, the early warning of the obstacle can be realized, the distance between the aircraft and the obstacle can be accurately controlled, the range of a safe area can be determined, and the accident occurrence rate can be reduced.
Drawings
FIG. 1 is a block diagram schematically illustrating the structure of the airborne panoramic all-round looking device of the present invention.
Fig. 2A-B are schematic flow diagrams of the airborne panoramic look-around method of the present invention.
FIG. 3 is a diagram of an example of a panoramic all-around image of an aircraft with the addition of auxiliary lines in accordance with the present invention.
Fig. 4 is a schematic diagram of an example of the installation positions of the laser radar sensor group and the camera sensor group of the present invention.
FIG. 5 is a schematic diagram of the present invention for converting pixels of a real world image to pixels of a display image.
Fig. 6 is an exemplary schematic diagram of a front view and a top view of a camera sensor cluster of the present invention.
Fig. 7 is an exemplary schematic diagram of the front view of a sensor cluster composed of a camera sensor cluster and a lidar sensor of the present invention.
FIG. 8 is an exemplary schematic diagram of the top view of a sensor cluster of the present invention that is combined with a camera sensor cluster and a lidar sensor.
Fig. 9 is an exemplary schematic diagram of the side view of a sensor cluster formed by the combination of a camera sensor cluster and a lidar sensor in accordance with the present invention.
FIG. 10 is a schematic illustration of an example of the invention in which calibration plates are laid around a target aircraft.
Fig. 11 is an exemplary diagram of a layout of orientation views displayed by the display of the present invention.
FIG. 12 is an exemplary schematic diagram of the display of the target aircraft of the present invention showing the situation when an obstacle is detected at the front, rear, and left 3 azimuths.
1. A first camera module;
2. a second camera module;
3. a group formed by a combination of 3 laser radar sensors;
301. a laser radar sensor with a 200-meter limit;
302. a laser radar sensor with a 100-meter limit;
303. a laser radar sensor with a 50-meter limit;
4. a group formed by a single laser radar sensor;
5. a black and white camera sensor;
6. a fisheye camera sensor;
7. a camera sensor orientation fine-tuning device;
8. a rotating device;
9. a telescoping device;
10. a camera sensor group integrated housing;
11. a tele-camera sensor;
12. a depth camera sensor;
13. an infrared light supplement lamp;
14. a camera sensor group;
15. a shielding cover adjusting frame;
16. a shielding cover;
17. a control device for the shielding cover adjusting frame.
Detailed Description
The invention will be further illustrated with reference to specific embodiments. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications can be made to the present invention by those skilled in the art after reading the present specification, and these equivalents also fall within the scope of the invention defined by the appended claims.
Example 1
The present embodiment provides a device and method for panoramic look-around viewing of an aircraft. As shown in fig. 1, the device comprises a switch control group, a laser radar sensor group, a camera sensor group, and a panoramic view processing and display platform. Through the switch control group the driver manually turns the power of the device on or off; when the device is turned on, the sensor groups are automatically adjusted to the specified positions and orientations, and the required laser radar sensor groups and camera sensor groups are turned on through the application scene controller and the picture display control module. The obstacle distance and azimuth data and the surrounding image data obtained by the laser radar sensor groups and the camera sensor groups are transmitted to the panoramic view processing and display platform; the images acquired by the camera sensor group of the same azimuth area are first synchronized, fused and adjusted, and the processed images of the camera sensor groups of different azimuth areas are then stitched and fused into one panoramic view. Auxiliary lines are added to the panoramic view according to the distance and azimuth data acquired by the laser radar sensor groups together with the intrinsic parameter data, obstacles are detected in real time and shown in the panoramic view, and finally the comprehensive panoramic view is shown on the display.
On the switch control group, a bright scene button, a dim-light scene button or a no-light scene button can be selected. The bright scene button is used in environments with good normal visible light, the dim-light scene button in environments with weak visible light such as cloudy days, and the no-light scene button in environments without light such as at night.
And the bright scene button controls a first camera module in the camera sensor group to start the fisheye camera, the depth camera and the long-focus camera, and a second camera module starts the fisheye camera, the black-and-white camera, the depth camera and the long-focus camera. And the dim light scene button controls a first camera module in the camera sensor group to start the fisheye camera, the black-and-white camera, the depth camera and the long-focus camera, and a second camera module starts the fisheye camera, the black-and-white camera, the depth camera and the long-focus camera. And controlling a first camera module in the camera sensor group to start a fisheye camera, an infrared camera and a long-focus camera by a non-light scene button, and starting the fisheye camera, the infrared camera and the long-focus camera by a second camera module.
The screen display control module divides the screen of the touch screen display into 1 main interface and 5 secondary interfaces, as shown in fig. 11, the main interface is located at the upper left corner of the screen and used for displaying the panoramic aerial view of the target aircraft, and the secondary interfaces are located below and on the right of the main interface. Areas 2 to 6 in the figure respectively show a front azimuth panoramic view, a rear azimuth panoramic view, a left azimuth panoramic view, a right azimuth panoramic view and a bottom azimuth panoramic view of the target aircraft. The driver can convert the touch control sub interface into the main interface for picture display.
The distribution of the laser radar sensor groups and camera sensor groups on the aircraft is shown in fig. 4, where 1 denotes the first camera module of a camera sensor group, 2 denotes the second camera module, 3 denotes a group formed by combining 3 laser radar sensors, and 4 denotes a group formed by a single laser radar sensor; where several symbols are combined, the sensor groups they designate share the same position. A single group of multi-type camera sensors includes, but is not limited to, fisheye cameras with an ultra-wide angle of 180-230 degrees, more than 300,000 pixels and a minimum resolution of 640 x 480, together with infrared, black-and-white, depth and long-focus cameras; the cameras of a group must be placed closely together with fixed orientations. The fisheye camera of each camera group is opened, and the rotating shaft on which it is mounted is adjusted up and down against the calibration-board checker paper (the calibration-board pattern and its arrangement around the aircraft are shown in fig. 10) so that the picture obtained by the camera covers the inner and outer edges of the calibration board, the covered picture can, after transformation, be effectively stitched with the images obtained by the adjacent cameras, and the pictures obtained by all the camera sensors together cover the full 360 degrees of the aircraft's exterior and the environment beneath it. The rotation angle of the rotating shaft is recorded and stored as the rotation parameter of the camera in that orientation, and this parameter can be called directly in later use to rotate the camera automatically to its orientation angle.
The schematic diagram of the front view angle and the side view angle of the single group of camera sensor groups is shown in fig. 6, wherein 5 represents a black-and-white camera sensor; 6 denotes a fisheye camera sensor; 7, a camera sensor position fine-tuning device, which is used for finely adjusting the position of the camera before the device is installed, so that the centers of the pictures acquired by 4 cameras in the device are positioned at one point; 8 denotes a rotating device that can rotate the sensor group up and down to a specified angle by point control; 9, a telescopic device is used for pushing out and retracting the sensor group into the groove to prevent the device from influencing the normal flight of the target aircraft, and 10, an integrated shell of the camera sensor group is shown; 11 denotes a telephoto camera sensor; 12 denotes a depth camera sensor; and 13, an infrared fill-in lamp.
The camera sensors acquire image data of the surrounding environment within 1 to 10 meters of the target aircraft. The azimuth angles of the laser radar sensor group formed by 3 laser radar sensors are adjusted so that the sensors point at ground positions 50 meters, 100 meters and 200 meters away from the sensor, respectively.
The schematic diagrams of the front view, the top view and the side view of the sensor group device of the combination of the camera sensor group and the lidar sensor group are respectively shown in fig. 7, 8 and 9, wherein 14 represents the camera sensor group, 301 represents the lidar sensor with 200 m limit, 302 represents the lidar sensor with 100 m limit, 303 represents the lidar sensor with 50 m limit, 8 represents the rotating device, 15 represents the shielding cover adjusting frame, 16 represents the shielding cover, 17 represents the shielding cover adjusting frame control device, and 9 represents the telescopic device. The shielding cover is used for shielding the groove when the device retracts into the groove, and the target aircraft is prevented from being influenced by airflow introduction during flight. The shielding cover adjusting frame control device can control the shielding cover adjusting frame to open and close the shielding cover.
In addition, in fig. 4 both the triangles and the squares represent sensor groups. A virtual x-axis is drawn through the centers of the aircraft's nose and tail and a virtual y-axis through the centers of the left and right wings, together forming a coordinate system. For each triangle, two of its vertices lie on a line parallel or perpendicular to a coordinate axis, and the direction of the third vertex away from that line indicates the orientation of the sensor group: pointing outwards means the sensors face the external environment beyond the aircraft's footprint, i.e. the part not covered by the aircraft in the bird's-eye view, and pointing inwards means the sensors face the environment underneath the aircraft, i.e. the part covered by the aircraft in the bird's-eye view. A square symbol means the sensor group points vertically downwards, perpendicular to the ground. Only the distribution of cameras along the same straight line is sketched; the arrangement can be extended to several groups in the same orientation.
The panoramic view processing and display platform can be built on an AGX Xavier computing platform and comprises an image display selection module, a distance and azimuth data acquisition module, an image data acquisition module, an equipment synchronization module, an image processing module, a parameter memory, a central arithmetic unit and a display. Wherein:
and the image display selection module is used for acquiring signal input of the image display control module in the switch control group, matching the laser radar sensor and the camera sensor equipment number in the corresponding area according to the image display control signal, binding corresponding camera internal and external parameters, acquiring conversion matrix parameters of a data point world coordinate system converted to an image pixel coordinate system and position information in the position by the laser radar sensor, and transmitting the parameters to the distance and position data acquisition module and the image data acquisition module. The selectable displays include the panoramic, front, rear, left, right, bottom 6 orientations of the target aircraft.
Another possible implementation is to subdivide the display orientation again, select to turn on the corresponding sensor mainly by marking the orientation sensor device number, and synthesize a panoramic image of the specified orientation. In a further possible embodiment, a plurality of sensor devices with different azimuth angles are combined, and image display is performed in the form that adjacent azimuth images are combined and non-adjacent azimuth images are displayed separately during panoramic image synthesis.
The equipment synchronization module synchronizes the thread with which the laser radar sensor group acquires distance and azimuth data and the threads with which the different camera sensors in the camera sensor group acquire images. The main strategy is to record the total number of sensors in the image display orientation area designated by the image display selection module and to add a counter: whenever a sensor has acquired its data, the counter is incremented by 1, the task is recorded as completed and the thread enters a sleep pool to wait for the next task; once the counter reaches the recorded total number of sensors, all threads are woken to enter the next task cycle.
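A minimal Python rendering of this counter-and-wake strategy is shown below; threading.Barrier plays the role of the counter that releases all acquisition threads once every sensor in the selected area has reported. The sensor objects and their read() calls are placeholders, not the device's actual interface.

```python
# Illustrative per-cycle synchronization of sensor acquisition threads.
import threading

def run_acquisition(sensors, frames_to_capture, on_synced_batch):
    barrier = threading.Barrier(len(sensors))
    results = [None] * len(sensors)

    def worker(idx, sensor):
        for _ in range(frames_to_capture):
            results[idx] = sensor.read()   # acquire distance/azimuth or image data
            if barrier.wait() == 0:        # last thread to arrive publishes the batch
                on_synced_batch(list(results))

    threads = [threading.Thread(target=worker, args=(i, s)) for i, s in enumerate(sensors)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```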
And the image processing module is used for processing the images acquired by the camera sensor, combining the distance and direction data acquired by the laser radar sensor with the image data, synthesizing a panoramic image, and performing auxiliary line adding and obstacle early warning area marking. The module comprises an image adjusting module, a multi-camera image fusion module in the same region, a panorama synthesis module, an auxiliary line adding module and an obstacle early warning module, wherein:
the image adjusting module can be used for receiving images of the same group and multiple types of cameras input by the equipment synchronization module in one implementation mode, converting the images into a bird's-eye view plan by utilizing the internal and external parameters of the cameras obtained by calibrating the cameras, and outputting the converted images to the image fusion module with the multiple cameras in the same area. In another embodiment, the method and the device can be used for receiving the image synthesized by the panorama synthesis module, and transmitting the image to the auxiliary line adding module after brightness balance adjustment and color balance adjustment are performed on the image.
The method for calibrating the camera used in the image adjusting module comprises the following steps:
step 101: initializing, and distributing storage space for camera parameters and corner points of all images;
step 102: reading an image and carrying out corner detection by using a Shi-Tomasi algorithm;
step 103: refining the coordinates of the angular points and drawing the extracted angular points;
step 104: for the image from which the corner point has been successfully extracted, storing the coordinate values of the corner point in the world coordinate system and the sub-pixel coordinate values in the image coordinate system, wherein the transformation diagram of points in a plurality of coordinate systems is shown in fig. 5;
step 105: calibrating, wherein the used calibration method is a conventional calibration method, and a schematic diagram of calibration plates laid around the target aircraft before calibrating is shown in FIG. 10;
step 106: and analyzing the error of the calibration result, and carrying out the calibration process again if the error is overlarge.
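Steps 101-106 can be sketched with OpenCV as follows. The description above names Shi-Tomasi corner detection; for a self-contained sketch the chessboard corner detector is used instead, because it returns corners in a known order against the world-coordinate grid, while cv2.cornerSubPix provides the sub-pixel refinement of step 103. Board size and square size are assumed values.

```python
# Hedged sketch of the calibration workflow of steps 101-106.
import numpy as np
import cv2
import glob

def calibrate_camera(image_glob, board_size=(9, 6), square_size_m=0.05):
    # Step 101: allocate storage for the corner points of all images.
    obj_template = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    obj_template[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_template *= square_size_m
    obj_points, img_points, image_size = [], [], None

    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        # Steps 102-103: detect corners, then refine them to sub-pixel accuracy.
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        # Step 104: store paired world-coordinate and image-coordinate points.
        obj_points.append(obj_template)
        img_points.append(corners)

    # Step 105: calibrate; step 106: rms is the reprojection error to inspect.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return rms, K, dist
```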
The brightness balance adjustment in the image adjusting module calculates a coefficient for each of the RGB 3 channels of the fused image of every camera group in the different orientations; the pictures returned by the N camera sensor groups give 3N channels in total. Because corresponding channels of different images differ in color, the values of over-bright and over-dark channels must be adjusted: 3N coefficients are calculated and multiplied onto the 3N channels respectively, the coefficient for an over-bright channel being smaller than 1 (dimming it) and the coefficient for an over-dark channel being larger than 1 (brightening it), finally forming the brightness-adjusted picture. In this calculation:
M_1 is the pixel matrix of the first of two adjacent camera images, M_2 is the pixel matrix of the second of the two adjacent camera images, W is the corresponding adjustment coefficient matrix after graying and binarization, Lr is the average brightness ratio of the pictures of the two adjacent cameras, Lr' is the average brightness ratio of the image stitched from the two current cameras, Lr'' is the average brightness ratio of the images stitched from the other cameras, L is the average brightness ratio of the stitched images of the different orientations, L' is the brightness coefficient, P is the pixel matrix of the original image, and P' is the pixel matrix after brightness adjustment.
The color balance adjustment in the image adjusting module takes the panoramic image after brightness balance, computes the average of the combined RGB three-channel values of its pixel matrix, and multiplies the original value of each channel by the ratio of this overall average to that channel's average, giving the RGB channels of the color-balanced image. This corresponds to the following equations:
K = (mean(M_R) + mean(M_G) + mean(M_B)) / 3
R_W = R × K / mean(M_R),  G_W = G × K / mean(M_G),  B_W = B × K / mean(M_B)
where M_R, M_G, M_B are the three primary-color matrices of the brightness-adjusted image, mean(·) denotes the average over a channel's pixel matrix, K is the average of the three channel means, R, G, B are the primary-color values of a pixel on the image, and R_W, G_W, B_W are that pixel's primary-color values after color adjustment.
And the same-region multi-camera image fusion module is used for overlapping the images of the same-direction and same-group multi-type cameras transformed by the image adjusting module on an image pixel coordinate system by utilizing a bottom-overlapped projection matrix in the parameter storage to fuse the images of the single-direction multi-camera images into a single one-direction image.
The method for fusing the images of the same-direction and same-group multi-type cameras comprises the following steps:
and (3) fusing the image with the image of the long-focus camera: the translation invariant discrete wavelet transform is adopted to carry out multi-layer filtering to form high-frequency sub-images and low-frequency sub-images of the two images. And performing high-frequency component fusion according to the high-frequency sub-images corresponding to the two images to form a high-frequency component fusion coefficient. And carrying out low-frequency component fusion according to the low-frequency sub-images corresponding to the two images to form a low-frequency component fusion coefficient. And performing translation invariant discrete wavelet inverse transformation according to the high-frequency component corresponding to the high-frequency component fusion coefficient and the low-frequency component corresponding to the low-frequency component fusion coefficient to generate a fusion image, thereby achieving the effect of optical zooming.
Fusing the image with the depth camera image: the method comprises the steps of obtaining a depth area for a depth camera image by utilizing pixel values of adjacent pixel points, obtaining an area outline subscript by utilizing an approximation method, setting the ratio of the value of a contour line subscript pixel point to the maximum value of the subscript as a weight value, setting the weight value of a pixel point in a non-outline area as 1, and obtaining a new pixel point weight matrix. And (3) the pixel point weight matrix and the value of the corresponding point of the original image pixel matrix are multiplied and integrated, so that the fused image of the original image and the depth camera image is obtained, and compared with the original image, the sharpness of the edge line of an object in the image is increased, and the stereoscopic impression is more prominent.
Fusion with the black-and-white camera image: the brightness (luminance) and chrominance components of the fisheye camera image are first obtained; the brightness component is fused with the black-and-white image by replacing each value with the average of the corresponding brightness-component and black-and-white pixel values, giving a fused brightness component pixel matrix. The fused brightness component is then recombined with the chrominance component to obtain the target image; compared with the original image, brightness in dark scenes is clearly improved and noise is reduced.
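A minimal sketch of this luminance fusion, assuming a YCrCb luma/chroma split and already registered images (the embodiment does not name a specific colour space), could look like this:

```python
import cv2
import numpy as np

def fuse_with_mono(fisheye_bgr: np.ndarray, mono_gray: np.ndarray) -> np.ndarray:
    """Average the luminance channel of the fisheye image with the black-and-white
    image and recombine with the original chrominance. YCrCb is an assumed
    luma/chroma split; images are assumed spatially registered."""
    ycrcb = cv2.cvtColor(fisheye_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    mono = cv2.resize(mono_gray, (y.shape[1], y.shape[0]))     # match sizes if needed
    y_fused = ((y.astype(np.uint16) + mono.astype(np.uint16)) // 2).astype(np.uint8)
    fused = cv2.merge([y_fused, cr, cb])
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```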
Fusion with the infrared camera uses a virtual fusion mode: in practice, the infrared fill light is switched on in dark scenes so that the infrared image is acquired directly.
The panorama synthesis module synthesizes the fused images of the cameras in each direction into a panorama according to the position information of the sensors in those directions. The method used in this module for fusing the overlapping image regions mainly comprises the following steps (an illustrative code sketch follows step 207):
step 201: taking out an overlapping area in the fused image of the two adjacent camera sensor groups;
step 202: graying and binarizing the image of the overlapping area obtained in the step 201;
step 203: removing noise points of the image processed in the step 202 by using morphological operation, specifically sliding the structural element on the original image, and setting the gray value of the image pixel point at the anchor point position of the structural element as the minimum value of the pixels of the image area corresponding to the area with the structural element value of 1. Is formulated as follows:
dst(x, y) = min{ src(x + x', y + y') : element(x', y') = 1 }
where element is the structuring element, (x, y) is the position of the anchor point O, x' and y' are the positional offsets, relative to the anchor point O, of the pixels where the structuring element equals 1, src denotes the original image, and dst denotes the result image.
Step 204: detecting the boundary outside the overlapping area of the image processed in the step 203 by using an approximation method;
step 205: calculating the weight value of each pixel point in the overlapping area relative to the boundary: the distances from the pixel point to the two boundaries of the overlapping area are computed from the coordinates of the target pixel point and the nearest contour coordinates of the overlapping-area boundary, and the weight value of each pixel point in the overlapping area is obtained from the ratio of these distances;
step 206: and setting the weight of the pixels in the non-overlapping area to obtain a continuously-changed weight matrix. Setting all the weights of pixel points which are not in an overlapping area in the first camera images of two adjacent cameras to be 1, setting all the weights of the pixel points which are not in the overlapping area in the second camera images of the two adjacent cameras to be 0, and combining the weight matrix of each pixel point in the overlapping area to obtain a continuously-changed matrix W with the value range of 0-1;
step 207: and fusing the images in the overlapping area by using the weight matrix, wherein the pixel matrix of the fused image is as follows:
I_f = W*I_1 + (1-W)*I_2
wherein W is the weight matrix corresponding to the pixel points, I_f is the pixel matrix of the fused image, I_1 is the pixel matrix of the first camera image of the two adjacent cameras, and I_2 is the pixel matrix of the second camera image of the two adjacent cameras;
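An illustrative sketch of steps 201 to 207 is given below; it approximates the boundary-distance weights of step 205 with distance transforms of each image's valid-pixel mask, which is an assumption rather than the exact contour-based computation described above:

```python
import cv2
import numpy as np

def blend_overlap(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Fuse two adjacent colour bird's-eye images that have already been warped
    into the same panorama pixel grid (black = no data). Distances to the
    overlap boundaries are approximated with distance transforms of each
    image's eroded valid-pixel mask (an equivalent but not identical
    formulation of steps 204-205)."""
    kernel = np.ones((3, 3), np.uint8)
    # Steps 202-203: binary valid-pixel masks, cleaned by a small erosion (noise removal).
    valid1 = cv2.erode((img1.sum(axis=2) > 0).astype(np.uint8), kernel)
    valid2 = cv2.erode((img2.sum(axis=2) > 0).astype(np.uint8), kernel)

    # Steps 204-205: per-pixel distance to each side's boundary, combined into a ratio.
    d1 = cv2.distanceTransform(valid1, cv2.DIST_L2, 3)
    d2 = cv2.distanceTransform(valid2, cv2.DIST_L2, 3)
    w = np.divide(d1, d1 + d2, out=np.ones_like(d1), where=(d1 + d2) > 0)

    # Step 206: weight 1 where only img1 contributes, 0 where only img2 contributes.
    w[(valid1 == 1) & (valid2 == 0)] = 1.0
    w[(valid2 == 1) & (valid1 == 0)] = 0.0

    # Step 207: I_f = W * I_1 + (1 - W) * I_2
    w3 = w[..., None].astype(np.float64)
    fused = w3 * img1.astype(np.float64) + (1.0 - w3) * img2.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```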
and the auxiliary line adding module is used for directly adding the depth auxiliary line, the horizontal distance auxiliary line and the aircraft contour auxiliary line on the panoramic image adjusted by the image adjusting module according to the preset distance in the parameter memory. As shown in fig. 4, the horizontal distance support lines are represented by 3 sets of circles, and the widths of the circles gradually decrease from the outside to the inside. When drawing the circular diagram, the center of the panoramic diagram is used as the circle center, the coordinates of the upper left corner of the world coordinate system calibration plate are mapped into the image pixel coordinate system, the distance between the circle center coordinate point and the mapped calibration plate coordinate point is used as the radius of the innermost circle, and the circle filled with black transparent frames inside is drawn. The layer 2, layer 3 and layer 4 circles are respectively extended by 3m, 6m and 10m based on the radius of the circle of the above layer, and the lengths of the circles mapped to the image pixel coordinate system are used as the radius of the circle of the layers 2, 3 and 4 to form 3 circular rings. The depth auxiliary line is expressed by utilizing front and rear 2 isosceles trapezoids, the front trapezoid and the rear trapezoid are symmetrical according to the circle center, the length of the lower bottom edge of the front trapezoid is coincided with the upper edge of the calibration plate, the minimum bottom angle of the isosceles trapezoids is set to be 45 degrees, four points of the trapezoid are required to fall on the edges of the layer 1 circle and the layer 4 circle respectively to draw the trapezoid, the solid line is drawn at the intersection of the trapezoid and the circle, the stereoscopic visual effect is achieved, and the length of the section where the height of the trapezoid and the circle are crossed represents the depth distance. The aircraft contour auxiliary line is the projection of the whole aircraft outer edge contour on an image pixel coordinate system.
Because there are many obstructions in the bottom environment, such as power plants, which occupy a non-negligible space, it is necessary to add an auxiliary line of the aircraft contour for distinguishing the aircraft outer panoramic view area from the aircraft bottom panoramic view area (i.e., the bottom ground condition covered by the aircraft under the bird's eye view).
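For illustration only, the horizontal-distance rings could be drawn as follows; the colours, line widths and the metres-to-pixels scale are assumptions, and the depth trapezoids and aircraft contour line are omitted from the sketch:

```python
import cv2
import numpy as np

def draw_distance_rings(panorama: np.ndarray, center, inner_radius_px: int,
                        px_per_m: float) -> np.ndarray:
    """Draw the innermost circle at the mapped calibration-board distance plus
    layer 2-4 rings extending 3 m, 6 m and 10 m beyond the previous layer,
    thinner toward the centre. Colours/widths and px_per_m are illustrative."""
    out = panorama.copy()
    radii = [inner_radius_px]
    for extra_m in (3.0, 6.0, 10.0):                     # layers 2-4 build on the previous radius
        radii.append(int(round(radii[-1] + extra_m * px_per_m)))
    for i, r in enumerate(radii):
        thickness = 2 + i                                 # widths decrease from outside to inside
        cv2.circle(out, center, r, color=(255, 255, 255), thickness=thickness)
    return out
```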
The obstacle early warning module detects information about obstacles around the target aircraft and draws obstacle warning marks on the panoramic view. The main method used in this module is as follows: if the measured distance is less than the obstacle-free distance recorded in the parameter memory (allowing for the obstacle identification error distance), an obstacle is marked in that direction relative to the target aircraft and the display area of the corresponding direction flashes on the display. Obstacles at 50 m, 100 m and 200 m are marked on the panoramic view and on the corresponding azimuth view with red, orange and yellow regular-triangle symbols respectively. Fig. 12 shows obstacle information detected between 50 and 100 m in front of the target aircraft, within 50 m behind it, and between 100 and 200 m to its left. When the target aircraft has left the ground but has not yet reached the preset safe flight height, that is, when the distance value acquired by the lidar sensor at the bottom of the aircraft is larger than the preset ground-taxiing height but smaller than the preset safe flight height, the 50 m, 100 m and 200 m obstacle boundaries are no longer computed with the preset ground-taxiing height but with the bottom lidar height measurement. If a signal that the target aircraft's landing gear has been retracted is detected, the obstacle early warning module is switched off.
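The threshold logic for the warning colours might be sketched as below; converting slant range to horizontal distance with the bottom-lidar height once the aircraft has left the ground is one plausible reading of the paragraph above, not a stated formula:

```python
import math

def obstacle_marker_colour(slant_distance_m: float, height_m: float = 0.0):
    """Map a lidar obstacle distance to the warning-triangle colour on the
    panorama: red within 50 m, orange within 100 m, yellow within 200 m,
    nothing beyond. The slant-to-horizontal correction is an assumption."""
    horizontal = math.sqrt(max(slant_distance_m ** 2 - height_m ** 2, 0.0))
    if horizontal <= 50:
        return "red"
    if horizontal <= 100:
        return "orange"
    if horizontal <= 200:
        return "yellow"
    return None            # no marker drawn, no flashing
```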
The parameter memory stores the pre-calibrated intrinsic and extrinsic parameters of the cameras, the equipment number and position number of each azimuth sensor, the extension and rotation parameters of each sensor group, the transformation matrix parameters for converting the lidar data-point world coordinate system into the image pixel coordinate system, the bottom-overlap projection matrix parameters of the multi-type cameras in each azimuth, the height of the target aircraft above the ground before lift-off, the safe flight height of the target aircraft, the obstacle identification error distance, the lift-off height error distance, and the pixel matrix parameters of the depth auxiliary line, horizontal distance auxiliary line and aircraft contour auxiliary line in the preset image pixel coordinate system. The parameters stored in the parameter memory are manually set, adjustable parameters and must be configured before the target aircraft actually operates.
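Purely as an illustration of how these manually set parameters might be grouped, a sketch follows; the field names and types are assumptions and do not reflect the actual data layout of the parameter memory:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ParameterStore:
    """Illustrative grouping of the parameters listed above (assumed layout)."""
    camera_intrinsics: dict = field(default_factory=dict)     # per-camera K and distortion
    camera_extrinsics: dict = field(default_factory=dict)
    sensor_positions: dict = field(default_factory=dict)      # device number -> azimuth/position number
    sensor_extension: dict = field(default_factory=dict)      # extension and rotation parameters per group
    lidar_to_pixel: np.ndarray = None                         # world -> image pixel transform
    overlay_projections: dict = field(default_factory=dict)   # per-azimuth bottom-overlap projection matrices
    ground_height_m: float = 0.0                              # height above ground before lift-off
    safe_flight_height_m: float = 0.0
    obstacle_error_m: float = 0.0
    liftoff_height_error_m: float = 0.0
    auxiliary_line_pixels: dict = field(default_factory=dict) # depth / horizontal / contour line pixel matrices
```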
The central arithmetic unit implements the computations required by the image processing module and can use a high-performance computing platform such as the AGX Xavier to handle the processing throughput.
The display shows the processed comprehensive panoramic view; it can be a flat-panel display, or the image can be projected to a head-mounted helmet display. The processed panoramic image is transmitted to the display at 20 to 30 frames per second, which is sufficient for a smooth panoramic surround-view effect. The switches of the cameras and the display are controlled by the landing-gear retraction signal, the flight height of the aircraft and the main power switch signal: when the aircraft landing gear is retracted and the flight height of the target aircraft detected by the bottom lidar sensor reaches the safe flight height, the cameras are switched off and the panoramic surround-view image is no longer displayed; when the landing gear is extended, the cameras are switched on and the panoramic surround-view image is restored.
The invention also provides an embodiment of a flow for implementing the aircraft panoramic looking-around method, as shown in fig. 2, the specific flow is as follows:
step 301: starting the system power supply; if the operating-parameter acquisition has not yet been performed, coordinate points in several coordinate systems are first computed through the calibration plate to derive and store the corresponding parameters; if the parameters have already been acquired, proceed directly to step 302;
step 302: a driver judges the current environment condition and selects a corresponding equipment operation scene (bright light/dim light/no light) by using an application scene controller;
step 303: acquiring an aircraft landing gear signal, and judging whether equipment is started or closed;
step 304: selecting the direction pictures to be displayed through the picture display control module; in combination with the application scene selected by the application scene controller, the corresponding camera sensor devices are switched on and image data are acquired by synchronized threads, while the lidar sensor at the bottom of the fuselage determines the obstacle-detection calculation mode for the 50 m, 100 m and 200 m boundaries and acquires distance and azimuth data;
step 305: processing images of the camera sensors through the data processing module, and fusing images acquired by the multiple camera sensors in the same group through the multiple camera image fusion module in the same region;
step 306: splicing the processed images in the appointed direction of the images obtained in the step 305 into a panoramic view through a panoramic image synthesis module;
step 307: adjusting the brightness balance of the panoramic image synthesized in the step 306 through an image adjusting module and an auxiliary line adding module, and adding auxiliary lines into the panoramic image;
step 308: using the obstacle distance and azimuth data acquired by the obstacle early warning module, the display area of the corresponding direction flashes on the display, and the obstacles are visualized on the panoramic image obtained in step 307 and on the image of the corresponding azimuth view (obstacles at 50 m, 100 m and 200 m are marked with red, orange and yellow regular-triangle symbols respectively);
step 309: and displaying the comprehensive panoramic image.
In step 301, when the cameras have not yet been calibrated, the switch control module is used to switch on the device, and images of the surrounding environment of the target aircraft are obtained from the multiple groups of multi-type camera sensors in the first and second camera modules. Each image must contain the black-and-white grid pattern of the calibration plate covering its inner and outer edges, or at least one layer of the calibration plate, and the pictures obtained by all camera sensors together must cover the environment outside and underneath the aircraft through 360 degrees. The length and width of the bird's-eye view, the lengths and widths from the inner ring to the outer ring of the calibration plate, and the distance from the calibration plate to the target aircraft are set; with the upper left corner of the bird's-eye view as the origin, the 4 world-coordinate-system points measured from the grid length and width of the calibration plate are obtained, and the intrinsic parameters and distortion coefficients of each camera sensor are acquired using the imaging principle of the pinhole camera model. The distorted original camera image is converted from the world coordinate system to the camera coordinate system by rigid-body transformations such as horizontal and vertical scaling and adjustment of the central coordinate point. In the transformed image, 4 preset key coordinate points are selected in sequence in the image physical coordinate system to obtain the mapping matrix of the camera sensor; perspective projection is then performed with the camera's intrinsic matrix and distortion coefficients to convert the image from the camera coordinate system to the image coordinate system. At this point the bird's-eye view of a single camera sensor is obtained; if the bird's-eye view is severely distorted its parameters are re-set, and the intrinsic and extrinsic camera parameters of a normal bird's-eye view are stored. In addition, from preset coordinate points on the image, the bottom-overlap projection matrices that convert the different camera types of the same camera group into a common image pixel coordinate system are solved. For the lidar sensors, the transformation matrix from the data-point world coordinate system to the image pixel coordinate system, together with the pixel matrices of the depth auxiliary line, horizontal distance auxiliary line and aircraft contour auxiliary line in the preset image pixel coordinate system, is obtained by actually measuring the distances and azimuths of several coordinate points relative to the calibration plate. Fixed parameters such as the device number and position of each azimuth sensor, the height of the target aircraft above the ground before lift-off, the safe flight height of the target aircraft, the obstacle identification error distance and the lift-off height error distance also need to be set or measured and stored in the parameter memory. If camera sensor calibration has already been performed, step 301 can proceed directly to step 302 using the real-time data acquired by the camera sensors and the lidar sensors.
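A compact sketch of this offline calibration using the standard OpenCV pinhole-calibration pipeline is shown below; the use of cv2.calibrateCamera and cv2.getPerspectiveTransform, and all variable names, are assumptions (fisheye cameras would use the cv2.fisheye variants instead):

```python
import cv2
import numpy as np

def calibrate_and_birdseye_homography(board_images, board_size, square_size_m,
                                      src_pts_px, dst_pts_px):
    """Estimate intrinsics/distortion from chessboard views via the pinhole model,
    then compute the bird's-eye mapping from 4 preset key points. Illustrative
    sketch only; at least one detected chessboard view is assumed."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size_m
    obj_pts, img_pts = [], []
    for img in board_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = board_images[0].shape[:2]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, (w, h), None, None)

    # 4 preset key points on the undistorted image -> 4 target points in the bird's-eye view
    H = cv2.getPerspectiveTransform(np.float32(src_pts_px), np.float32(dst_pts_px))
    return K, dist, H
```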
In step 302, the scene controller buttons and the secondary interface of the display area of the touch display can be used at any time, according to the actual driving situation, to switch the orientation image shown on the main interface.
In step 305, image distortion correction and transformation are performed on the fisheye camera images, corresponding transformations are then performed on the other camera images to obtain the bird's-eye view of each camera image, and the bird's-eye views of the different camera types in the same camera group are overlaid to obtain a new bird's-eye view.
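For a fisheye camera this step might look like the following sketch, assuming the intrinsics K, distortion coefficients D and bird's-eye homography H come from the parameter memory; the use of the OpenCV fisheye model is an assumption:

```python
import cv2
import numpy as np

def fisheye_to_birdseye(img, K, D, H, out_size):
    """Undistort a fisheye image with the fisheye model, then warp it with the
    stored bird's-eye homography H. Other camera types would skip the fisheye
    undistortion. Illustrative sketch, not the patented pipeline."""
    h, w = img.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
    return cv2.warpPerspective(undistorted, H, out_size)
```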
In step 306, when the images are spliced into the panoramic view, the bird's-eye views are ordered by sensor position and rotated by preset angles, the images in the non-overlapping areas are mapped unchanged to the image pixel matrix at the corresponding positions of the panoramic view, and the images in the overlapping areas are fused according to the smoothly varying pixel weight matrix. The synthesized panoramic image is then adjusted for brightness balance and colour balance.
The auxiliary lines in step 307 are added by the auxiliary line adding module in the image processing module. The module takes the synthesized panoramic surround view of the aircraft and adds the depth auxiliary line, the horizontal distance auxiliary line and the aircraft contour auxiliary line on top of it: the horizontal distance auxiliary line is drawn with circles, the depth auxiliary line is drawn with front and rear trapezoids, the combination of trapezoids and circles gives the circles a stereoscopic appearance, and the aircraft contour auxiliary line is obtained by scaling the bird's-eye view of the original target aircraft in the world coordinate system proportionally and drawing only its outer-edge contour.
In step 308, obstacle early warning is performed: obstacle information is detected by the lidar sensor group, the stored transformation matrix from the lidar data-point world coordinate system to the image pixel coordinate system is then used to visualize the numerical data, the obstacles are represented by triangular symbols drawn on the panoramic view and on the view of the corresponding direction, and the display area of the corresponding direction on the display flashes as a prompt.
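The world-to-pixel visualisation can be sketched as below, assuming the stored transformation is a 3x4 projective matrix; the matrix shape and function name are assumptions:

```python
import numpy as np

def lidar_points_to_pixels(points_xyz, T_world_to_pixel):
    """Map lidar obstacle points given in the world coordinate system to image
    pixel coordinates with a stored 3x4 projective transformation matrix, so
    that warning triangles can be drawn at the resulting positions."""
    pts = np.asarray(points_xyz, dtype=np.float64)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # N x 4 homogeneous points
    proj = (T_world_to_pixel @ homo.T).T                  # N x 3 projective pixel coords
    return proj[:, :2] / proj[:, 2:3]                     # divide by w to get (u, v)
```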
Step 309 is to transmit the panoramic annular view image synthesized by the image processing module to the display for displaying, and to form a real-time panoramic annular view by using the image display with a high frame rate.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and additions can be made without departing from the principle of the present invention, and these should also be considered as the protection scope of the present invention.

Claims (10)

1. An airborne panoramic all-round looking device is characterized by comprising a switch control group, a laser radar sensor group, a camera sensor group and a panoramic view processing and displaying platform, wherein the switch control group comprises a power controller, an application scene controller and a picture display control module, and is used for controlling the device to be turned on and off, switched between application scenes and switched between display pictures; the laser radar sensor group is used for acquiring distance and azimuth data between an obstacle and a target aircraft; the camera sensor group comprises a first camera module and a second camera module, and is used for collecting image data of the surroundings and the bottom environment of the target aircraft in real time; the panoramic view processing and displaying platform comprises an image display selection module, a sensor group azimuth adjustment module, a distance azimuth data acquisition module, an image data acquisition module, an equipment synchronization module, an image processing module, a parameter memory, a central arithmetic unit and a display, and is used for automatically adjusting the azimuth of the sensor group, synchronously fusing, adjusting and processing images acquired by camera sensor groups with the same azimuth area, splicing and fusing the images processed by the camera sensor groups with different azimuth areas into a panoramic view, adding an auxiliary line according to the distance azimuth data acquired by the laser radar sensor groups and the intrinsic parameter data of an aircraft in the panoramic view, detecting obstacles in real time and displaying the obstacles in the panoramic view, and finally displaying the comprehensive panoramic view on the display.
2. The apparatus of claim 1, wherein the application scene controller comprises a bright scene button, a dim scene button, and a no-bright scene button, the bright scene button controlling a first camera module of the camera sensor group to turn on a fisheye camera, a depth camera, and a tele camera, and a second camera module to turn on a fisheye camera, a black-and-white camera, a depth camera, and a tele camera; the dim light scene button controls a first camera module in the camera sensor group to start a fisheye camera, a black-and-white camera, a depth camera and a long-focus camera, and a second camera module starts the fisheye camera, the black-and-white camera, the depth camera and the long-focus camera; the non-light scene button controls a first camera module in the camera sensor group to start a fisheye camera, an infrared camera and a long-focus camera, and a second camera module starts the fisheye camera, the infrared camera and the long-focus camera.
3. The apparatus of claim 2, wherein the screen display control module divides the touch screen display screen into a primary interface and a secondary interface, the primary interface displaying the panoramic aerial view and the bottom orientation view of the target aircraft, and the secondary interface displaying the panoramic views of the front, rear, left, right, and bottom orientations of the target aircraft.
4. The apparatus of claim 3, wherein the lidar sensor group is located at the nose, the leading edge of the wing, and the middle bottom portion of the fuselage at the outer edge of the aircraft, and is used for obtaining the distance and the orientation of the obstacle and the distance between the target aircraft and the ground.
5. The apparatus of claim 4, wherein the image display selection module is configured to obtain signal inputs from the image display control modules in the switch control group, match the numbers of the lidar sensors and the camera sensors in corresponding areas according to the image display control signals, bind corresponding internal and external parameters of the camera sensors, the lidar sensor conversion matrix parameters, and the sensor group orientation parameters, and transmit the parameters to the distance and orientation data obtaining module and the image data obtaining module.
6. The device as claimed in claim 5, wherein the sensor group orientation adjustment module is configured to obtain the switch signal of the switch control group and the rotation parameter and the expansion parameter in the parameter storage, control the position and the orientation of the sensor group, open the sensor group shield cover when receiving the start signal of the switch control group, extend the sensor group out of the groove to a designated position according to the expansion parameter, adjust the up-down angle according to the rotation parameter, and retract the sensor group into the groove and close the groove with the shield cover when receiving the close signal of the switch control group.
7. The apparatus according to claim 6, wherein the distance and orientation data acquisition module and the image data acquisition module respectively turn on corresponding lidar sensor devices in the lidar sensor group and corresponding camera sensor devices in the camera sensor group through the device numbers input by the image display selection module, and respectively transmit the acquired distance and orientation data and image data and corresponding device parameters to the device synchronization module.
8. The apparatus of claim 7, wherein the device synchronization module is configured to synchronize a thread of the lidar sensor group acquiring distance and azimuth and a thread of different camera sensors in the camera sensor group acquiring images, and transmit data to the image processing module.
9. The device of claim 8, wherein the image processing module is used for processing the image acquired by the camera sensor, fusing the distance and direction data acquired by the laser radar sensor with the image data, synthesizing a panoramic image, and performing auxiliary line adding and obstacle early warning area marking; the parameter memory is used for storing parameters inside and outside the camera, the equipment number and the position number of each azimuth sensor, the stretching parameter and the rotating parameter of each sensor group, the conversion matrix parameter of a data point world coordinate system obtained by a laser radar sensor to be converted into an image pixel coordinate system, the image bottom-overlapping projection matrix parameter of each azimuth multi-type camera, the height parameter of a target aircraft from the ground before flying, the safe flying height parameter of the target aircraft, the obstacle identification error distance parameter, the flying height error distance parameter and the depth auxiliary line of the target aircraft, the horizontal distance auxiliary line and the pixel matrix parameter of the aircraft contour auxiliary line in a preset image pixel coordinate system; the central arithmetic unit is used for realizing related arithmetic in the image processing module; and the display is used for displaying the processed comprehensive panoramic image view.
10. A method for panoramic surround view using the apparatus of any of claims 1-9, comprising the steps of:
(1) Arranging a sensor group, and acquiring, storing and calculating parameters;
(2) Selecting an application scene (bright light/weak light/no light), and turning on corresponding sensor equipment;
(3) Fusing the camera images in the same direction;
(4) Synthesizing a panoramic view;
(5) Adding an auxiliary line;
(6) Detecting and early warning obstacles;
(7) And displaying the comprehensive panoramic image.
CN202211406240.2A 2022-11-10 2022-11-10 Airborne panoramic all-round looking device and method Pending CN115914815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211406240.2A CN115914815A (en) 2022-11-10 2022-11-10 Airborne panoramic all-round looking device and method

Publications (1)

Publication Number Publication Date
CN115914815A true CN115914815A (en) 2023-04-04

Family

ID=86475466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211406240.2A Pending CN115914815A (en) 2022-11-10 2022-11-10 Airborne panoramic all-round looking device and method

Country Status (1)

Country Link
CN (1) CN115914815A (en)

Similar Documents

Publication Publication Date Title
KR101068329B1 (en) Systems and methods for providing enhanced vision imaging
KR101023567B1 (en) Systems and methods for providing enhanced vision imaging with decreased latency
US11106203B2 (en) Systems and methods for augmented stereoscopic display
JP3383323B2 (en) Virtual image display system for aircraft
CN104851076B (en) Panoramic looking-around parking assisting system and camera installation method for commercial car
US8040361B2 (en) Systems and methods for combining virtual and real-time physical environments
CN110758243B (en) Surrounding environment display method and system in vehicle running process
US20100182340A1 (en) Systems and methods for combining virtual and real-time physical environments
CN105354796B (en) Image processing method and system for auxiliary of driving a vehicle
CN104890875A (en) Multi-rotor-wing unmanned aerial vehicle for panoramic shooting
CN102164274A (en) Vehicle-mounted virtual panoramic system with variable field of view
CN104601953A (en) Video image fusion-processing system
US20190141310A1 (en) Real-time, three-dimensional vehicle display
JP2010018102A (en) Driving support device
US9390558B2 (en) Faux-transparency method and device
CN204726673U (en) The many rotor wing unmanned aerial vehicles of pan-shot
US9726486B1 (en) System and method for merging enhanced vision data with a synthetic vision data
DE102005055879A1 (en) Air Traffic guide
JP3252129B2 (en) Helicopter operation support equipment
CN115914815A (en) Airborne panoramic all-round looking device and method
CN204297108U (en) Helicopter obstacle avoidance system
CN111818275A (en) All-region-covered vehicle-mounted panoramic monitoring system
CN212324233U (en) All-region-covered vehicle-mounted panoramic monitoring system
CN113436134A (en) Visibility measuring method of panoramic camera and panoramic camera applying same
CN110884672A (en) Auxiliary landing device of panoramic imaging helicopter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination