WO2019100219A1 - Output image generation method, device and unmanned aerial vehicle - Google Patents

Output image generation method, device and unmanned aerial vehicle

Info

Publication number
WO2019100219A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, photographing device, posture, photographing, aircraft
Application number
PCT/CN2017/112202
Other languages
French (fr)
Chinese (zh)
Inventor
张明磊
马岳文
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/112202
Priority to CN201780026914.7A
Publication of WO2019100219A1

Classifications

    • H04N: Pictorial communication, e.g. television (Section H: Electricity; Class H04: Electric communication technique)
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N5/04: Details of television systems; Synchronising
    • H04N5/265: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Mixing

Definitions

  • the present application relates to the field of UAV application technologies, and in particular, to an output image generation method, device, and drone.
  • A Digital Orthophoto Map (DOM) is produced from digital aerial or remote-sensing imagery (monochrome or color): the imagery is corrected pixel by pixel for projection displacement using a digital elevation model and then mosaicked and cropped to the extent of the map sheet. Because a real terrain surface is used as the mosaic projection surface, the image carries real geographic coordinate information, and true distances can be measured on it.
  • In aerial surveying, a higher flying height is usually used in order to obtain a more nearly orthographic aerial image, and a longer focal length is then required, which gives the camera a small angle of view (FOV).
  • However, the accuracy of forward intersection is related to the intersection angle (within a certain range, the larger the intersection angle, the higher the accuracy), and the intersection angle obtained when object points are intersected from images of a telephoto (small-FOV) camera is smaller, so the geometric accuracy is low, especially in the elevation direction.
  • Inaccuracy in the elevation direction degrades the accuracy of the final digital orthophoto; to compensate, the images captured by the telephoto camera must satisfy a high overlap rate, which requires shooting a large number of images and increases the computational cost. (A first-order error model illustrating this trade-off is sketched below.)
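  • As a hedged illustration of the accuracy argument above (this model is not part of the original disclosure), the standard first-order forward-intersection (stereo) error relation ties the elevation uncertainty to the baseline between exposure stations and the object distance; because the intersection angle is roughly the baseline-to-distance ratio, a smaller intersection angle inflates the elevation error:

```latex
% First-order forward-intersection (stereo) error model, for illustration only.
% Z: object distance (relative flying height), B: baseline between exposure stations,
% f: focal length expressed in pixels, \sigma_d: image-matching uncertainty in pixels.
\[
  \sigma_Z \;\approx\; \frac{Z^{2}}{f\,B}\,\sigma_d ,
  \qquad
  \theta_{\mathrm{intersection}} \;\approx\; \frac{B}{Z}.
\]
```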
  • In view of this, the embodiments of the invention provide an output image generation method, a device, and an unmanned aerial vehicle, to improve the accuracy and precision of the output image.
  • A first aspect of the present invention provides a method for generating an output image, including: acquiring a first image obtained by a first photographing device mounted on an aircraft and a second image obtained by a second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold; calculating, based on a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured; generating a terrain surface based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, projecting and stitching the second image on the terrain surface to obtain an output image.
  • A second aspect of the present invention provides a method for generating an output image, including the same steps as the first aspect, wherein the shooting interval of the first photographing device and the second photographing device is associated with the flying height of the aircraft relative to the ground.
  • a third aspect of the embodiments of the present invention provides a ground station, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold;
  • the processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
  • the processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
  • the processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • a fourth aspect of the embodiments of the present invention provides a ground station, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold, and wherein the shooting interval of the first photographing device and the second photographing device is associated with the flying height of the aircraft relative to the ground;
  • the processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
  • the processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
  • the processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • a fifth aspect of the embodiments of the present invention provides an aircraft controller, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold;
  • the processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
  • the processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
  • the processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • a sixth aspect of the embodiments of the present invention provides an aircraft controller, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold, and wherein the shooting interval of the first photographing device and the second photographing device is associated with the flying height of the aircraft relative to the ground;
  • the processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
  • the processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
  • the processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • A seventh aspect of the embodiments of the present invention provides a computer readable storage medium comprising instructions that, when executed on a computer, cause the computer to execute the output image generation method according to the first aspect or the second aspect described above.
  • An eighth aspect of the embodiments of the present invention provides a drone, including:
  • a fuselage; a power system mounted to the fuselage for providing flight power;
  • a first photographing device and a second photographing device mounted on the fuselage for capturing images, wherein the FOV of the first photographing device is greater than or equal to a preset threshold.
  • In the embodiments of the present invention, the first image obtained by the first photographing device mounted on the aircraft, whose FOV is greater than or equal to the preset threshold, is acquired, and the second image obtained by the second photographing device carried by the aircraft is acquired; based on a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured are calculated; a terrain surface is generated based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
  • Because the FOV of the first photographing device in the embodiments of the present invention is greater than or equal to a preset threshold, and the larger the FOV, the higher the accuracy of the elevation surface (i.e. the terrain surface) obtained by fitting the images captured by the first photographing device, the images taken by the other photographing devices on the aircraft can be projected onto this elevation surface to obtain correspondingly high-precision orthophotos, which improves the accuracy of orthophoto generation.
  • FIG. 1 is a flowchart of a method for generating an output image according to the present invention
  • FIG. 2 is a schematic diagram of a connection between a ground station and an aircraft according to an embodiment of the present invention
  • FIG. 3a and FIG. 3b are schematic diagrams of two output images of the same scene provided by the present invention;
  • FIG. 4a and FIG. 4b are schematic diagrams of two output images of the same scene according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • Figure 7a is an unspliced near-infrared image obtained by a near-infrared camera
  • Figure 7b is a near-infrared output image obtained after the near-infrared image shown in Figure 7a is obtained by ortho-splicing;
  • Figure 7c is a visible light output image corresponding to a visible light camera photographed synchronously with the near infrared camera of Figure 7a;
  • FIG. 7d is an NDVI index map calculated by using the near-infrared image and the red band of the visible light output image;
  • FIG. 7e is a schematic diagram showing the result of pseudo-color rendering, using green, of the ortho-stitched near-infrared image;
  • FIG. 8 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • FIG. 10a and FIG. 10b are schematic diagrams of two shooting intervals provided by an embodiment of the present invention;
  • FIG. 11 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may also be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component or an intervening component may also be present.
  • Embodiments of the present invention provide an output image generation method, which may be performed by a ground station or a controller mounted on a drone.
  • The following embodiments are described in detail by taking the ground station as an example; the implementation of the controller is similar to that of the ground station and is not described again in these embodiments.
  • FIG. 1 is a flowchart of a method for generating an output image according to the present invention. As shown in FIG. 1 , the method in this embodiment includes:
  • Step 101 Obtain a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the field of view (FOV) of the first photographing device is greater than or equal to the preset threshold.
  • the ground station in this embodiment is a device having a computing function and/or processing capability, and the device may specifically be a remote controller, a smart phone, a tablet computer, a laptop computer, a watch, a wristband, and the like, and combinations thereof.
  • the aircraft in this embodiment may specifically be a drone equipped with a photographing device, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like.
  • the first photographing device may be a photographing device whose FOV is greater than or equal to a preset threshold (for example, a wide-angle camera with an FOV greater than or equal to a preset threshold).
  • the size of the preset threshold may be set according to requirements, which is not limited in this embodiment.
  • When the first photographing device is a wide-angle camera, it captures visible light images.
  • the FOV of the second photographing device is smaller than the FOV of the first photographing device.
  • the second photographing device may be a photographing device with a FOV smaller than the preset threshold (for example, a telephoto camera with a FOV smaller than a preset threshold).
  • the second photographing device may also be a near-infrared camera or an infrared camera, and when the second photographing device is a near-infrared camera or an infrared camera, its FOV may be greater than, less than, or equal to the FOV of the first photographing device.
  • When the second photographing device is a near-infrared camera it captures near-infrared images, when it is an infrared camera it captures infrared images, and when it is a telephoto camera it captures visible light images.
  • The ground station 21 and the aircraft 22 can be connected through an Application Programming Interface (API) 23, but are not limited to being connected through an API.
  • The ground station 21 and the aircraft 22 can be connected by wire or wirelessly, for example via Wireless Fidelity (WI-FI), Bluetooth, software defined radio (SDR), or other custom protocols.
  • the aircraft can perform automatic cruising and photographing according to a predetermined route, and can also perform cruising and photographing under the control of the ground station.
  • The first photographing device and the second photographing device may shoot at a preset fixed shooting interval (a time interval or a distance interval), or may select a suitable shooting interval according to a preset strategy based on the relative flying height between the aircraft and the terrain surface: for example, when the flying height of the aircraft above the surface is high, a relatively large shooting interval is used, and when it is low, a relatively small shooting interval is used, so that a preset image overlap ratio is satisfied between images taken at adjacent times.
  • Of course, this is only an example, and the actual scene is not limited to ensuring the image overlap ratio in the above manner.
  • The ground station can obtain the images captured by the first photographing device and the second photographing device in the following possible manners:
  • the aircraft sends the images obtained by the first photographing device and the second photographing device to the ground station in real time through the API between the aircraft and the ground station;
  • the aircraft sends the images obtained by the first photographing device and the second photographing device within a preset time interval to the ground station according to that preset time interval; or
  • the aircraft sends the images obtained by the first photographing device and the second photographing device during the entire cruise to the ground station collectively.
  • The aircraft may send the images captured by the first photographing device and the second photographing device to the ground station in the form of code stream data or in the form of thumbnails; the resolution of the returned code stream data or thumbnails is not specifically limited and depends on the computing power of the aircraft and the ground station, and the original images may also be returned.
  • Taking the thumbnail form as an example, when the images are sent to the ground station as thumbnails, the ground station can display the received thumbnails so that the user can clearly see the images obtained by real-time shooting.
  • Step 102 Calculate a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when the second image is captured, according to a preset algorithm.
  • Specifically, the ground station may calculate, based on a first preset image processing algorithm, the position and posture of the first photographing device when the first image is captured, and calculate, based on a second preset image processing algorithm, the position and posture of the second photographing device when the second image is captured.
  • The first preset image processing algorithm and the second preset image processing algorithm may be the same or different, and in this embodiment each may be any of the following.
  • For example, the position and posture of the first photographing device when capturing the first image can be calculated by using preset image control points as constraint conditions.
  • Alternatively, the first photographing device and the second photographing device shoot synchronously, and the ground station calculates, based on a preset image processing algorithm (an aerial triangulation algorithm, an SFM algorithm, a SLAM algorithm, or the like), the position and posture of the first photographing device when the first image is captured, and then calculates the position and posture of the second photographing device when the second image is captured based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  • It should be noted that the position and posture obtained by the SLAM algorithm, the aerial triangulation algorithm, or the SFM algorithm in the prior art are relative positions and relative postures within the shooting scene.
  • The relative position and posture can be converted into a position and posture in the world coordinate system in any of the following ways.
  • The ground station can use the GPS measuring device on the aircraft to acquire the GPS information of the aircraft (the GPS information may be provided by a Real-Time Kinematic (RTK) system) and convert the relative position and posture obtained by the above calculation into a position and posture in the world coordinate system.
  • Alternatively, the ground station converts the position and posture obtained by the above calculation into a position and posture in the world coordinate system based on the GPS information of preset image control points.
  • Specifically, the position of each image control point in the first image captured by the first photographing device and in the second image captured by the second photographing device may be found manually, and then, based on those image positions and the GPS information of the image control points, the relative position and posture obtained by the above calculation are converted into a position and posture in the world coordinate system.
  • Alternatively, image recognition may be used: first, based on the GPS information of the image control points, the images containing an image control point, and the regions within those images where the control point may appear, are located among the first images captured by the first photographing device and the second images captured by the second photographing device; the image control point is then identified within these regions by a preset machine learning model and an optimization algorithm, which gives the position of the image control point in the first image and the second image; finally, based on these positions and the GPS information of the image control points, the relative positions and postures of the first photographing device and the second photographing device when the images were captured are converted into positions and postures in the world coordinate system.
  • Compared with the manual method, the image recognition method improves the efficiency of output image generation.
  • This embodiment may further display the relative position of the image control points in the first image and/or the second image to improve the user experience. (A minimal sketch of the control-point-based conversion follows below.)
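  • As a non-authoritative illustration of the control-point-based georeferencing described above, the Python sketch below (the function name and the choice of a similarity transform are assumptions, not taken from the disclosure) estimates the scale, rotation, and translation that map reconstruction-frame control-point coordinates to their surveyed world (GPS-derived) coordinates using the Umeyama method; the same transform can then be applied to the relative camera positions.

```python
import numpy as np

def fit_similarity_transform(relative_pts, world_pts):
    """Umeyama fit: world ~ scale * R @ relative + t.

    relative_pts, world_pts: (N, 3) arrays of matched control points, e.g.
    triangulated image-control-point positions in the reconstruction frame
    and their surveyed world (GPS-derived) coordinates.
    """
    relative_pts = np.asarray(relative_pts, dtype=float)
    world_pts = np.asarray(world_pts, dtype=float)
    mu_r = relative_pts.mean(axis=0)
    mu_w = world_pts.mean(axis=0)
    src = relative_pts - mu_r
    dst = world_pts - mu_w
    cov = dst.T @ src / len(src)                  # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src.var(axis=0).sum()
    t = mu_w - scale * R @ mu_r
    return scale, R, t

# Applying the fitted transform to every relative camera position:
#   world_cam = scale * R @ relative_cam + t
```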
  • Alternatively, when the aircraft sends the images captured by the first photographing device and the second photographing device to the ground station, it also sends the GPS information of the aircraft at the time each image was taken, and the ground station converts the calculated relative position and posture into a position and posture in the world coordinate system according to the GPS information corresponding to each image.
  • The relative position and posture of the first photographing device when the first image is captured may also first be converted into a position and posture in the world coordinate system by any of the above ways, and then, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the relative position and posture of the second photographing device when the second image is captured are converted into a position and posture in the world coordinate system. (A minimal sketch of this composition follows below.)
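  • The composition of the pre-calibrated inter-camera extrinsics with the first camera's world pose can be sketched as follows; this is a minimal illustration assuming 4x4 homogeneous camera-to-world transforms, not the implementation disclosed in the application.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 camera-to-world homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def second_camera_pose(T_world_cam1, T_cam1_cam2):
    """Pose of the second photographing device in world coordinates.

    T_world_cam1: pose of the first (wide-FOV) camera at the exposure time,
                  e.g. from aerial triangulation / SFM after georeferencing.
    T_cam1_cam2:  pre-calibrated transform from the second camera's frame to
                  the first camera's frame (rig calibration).
    """
    return T_world_cam1 @ T_cam1_cam2   # chain the two transforms
```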
  • Step 103 Generate a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured.
  • Specifically, the ground station performs dense matching according to the position and posture of the first photographing device when capturing the first image, generating a corresponding dense point cloud or semi-dense point cloud, and then fits the point cloud generated by the dense matching to form a terrain surface.
  • The point cloud generated by the dense matching may also first be divided into ground points and non-ground points; the ground points are then extracted from the point cloud and the terrain surface is formed by fitting these ground points.
  • Of course, this is merely an example and is not a limitation of the present invention.
  • For example, a digital surface model (DSM) pre-stored in the ground station may instead be used as the terrain surface, and the first image captured by the first photographing device and the second image captured by the second photographing device may then be projected onto the DSM.
  • In addition, a more accurate terrain surface can be calculated by using preset image control points as constraints. (A simple terrain-fitting sketch follows below.)
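  • A minimal sketch of fitting a terrain (elevation) surface from the ground points of a dense point cloud is shown below; the gridding approach, function names, and use of SciPy are assumptions made for illustration only, not the method claimed in the application.

```python
import numpy as np
from scipy.interpolate import griddata

def fit_terrain_surface(ground_points, resolution=1.0):
    """Interpolate a regular elevation grid (a simple DEM) from ground points.

    ground_points: (N, 3) array of x, y, z coordinates classified as ground.
    resolution:    grid spacing in the same units as x and y.
    Returns grid_x, grid_y, grid_z arrays describing the terrain surface.
    """
    x, y, z = np.asarray(ground_points, dtype=float).T
    xi = np.arange(x.min(), x.max(), resolution)
    yi = np.arange(y.min(), y.max(), resolution)
    grid_x, grid_y = np.meshgrid(xi, yi)
    # Linear interpolation between ground points, nearest-neighbour fill for gaps.
    grid_z = griddata((x, y), z, (grid_x, grid_y), method="linear")
    holes = np.isnan(grid_z)
    grid_z[holes] = griddata((x, y), z, (grid_x[holes], grid_y[holes]),
                             method="nearest")
    return grid_x, grid_y, grid_z
```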
  • Step 104 Perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • Specifically, the first image and the second image may be projected onto the terrain surface based on the relative position and posture of the first photographing device when the first image is captured and the relative position and posture of the second photographing device when the second image is captured.
  • Alternatively, the first image and the second image may be projected onto the terrain surface based on the position and posture of the first photographing device in the world coordinate system when the first image is captured and the position and posture of the second photographing device in the world coordinate system when the second image is captured.
  • The method for stitching the projections of the first image and the second image in this embodiment may be any of the following: a direct overlay method, a panoramic image stitching method, a method in which each region of the final image is selected from the image whose center is closest to that region, or a stitching method based on a cost function.
  • The projection of the second image may also be stitched based on the stitching line used when the projection of the first image is stitched.
  • Taking the stitching method based on a cost function as an example, the stitching may be performed as follows: the ground station first performs dense matching based on the position and posture of the first photographing device when capturing the first image to generate a dense point cloud or semi-dense point cloud, then constructs a cost function based on the projection of the second image on the terrain surface and the point cloud generated above, and stitches the projection of the second image based on this cost function so that the color difference on both sides of the stitching line is minimized.
  • The stitching of the projection of the first image is similar to that of the second image and is not described again here.
  • Alternatively, the ground station may first construct a cost function based on the projection of the second image on the terrain surface and the point cloud obtained from the second image, and then stitch the projection of the second image based on this cost function. (A sketch of the per-pixel projection step is given below.)
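  • The projection step itself can be viewed as a ray-terrain intersection for each pixel. The following minimal example, which assumes a pinhole camera model, a camera-to-world rotation convention, and a callable elevation lookup (none of which are specified in the application), maps one pixel of the second image to a ground coordinate by marching along its viewing ray.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_center, terrain_z,
                    step=0.5, max_range=2000.0):
    """Project one pixel onto the terrain surface along its viewing ray.

    u, v:       pixel coordinates in the second image.
    K:          3x3 intrinsic matrix of the second photographing device.
    R_wc:       3x3 camera-to-world rotation at the exposure time.
    cam_center: (3,) camera position in world coordinates at the exposure time.
    terrain_z:  callable terrain_z(x, y) -> ground elevation (e.g. the fitted DEM).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in the camera frame
    ray_world = R_wc @ ray_cam
    ray_world /= np.linalg.norm(ray_world)
    t = step
    while t < max_range:                                  # march until the ray
        p = cam_center + t * ray_world                    # drops below the terrain
        if p[2] <= terrain_z(p[0], p[1]):
            return p
        t += step
    return None  # the ray never reached the terrain within max_range
```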
  • The ground station can process the received images in either of the following two working modes.
  • Mode 1: the ground station processes each image as it is received. That is, during the cruise of the aircraft, the ground station processes every received image to obtain a semi-dense point cloud or dense point cloud of the image, and updates the semi-dense point cloud, dense point cloud, or sparse point cloud obtained by the processing for each image received.
  • It should be noted that this processing mode is not to be taken strictly literally: it depends on the processing speed of the ground station, and if the processing speed can keep up with the reception, the ground station processes each image immediately after receiving it.
  • Mode 2: the ground station processes the received images sequentially. Specifically, the ground station may process the images in the order in which they are received, in the order in which they are stored, or in another customized order, which is not specifically limited in this embodiment.
  • During the stitching process, global color adjustment and/or brightness adjustment may first be performed on the calculated point cloud to noticeably improve image quality. Then, based on the adjusted projections of the images, a cost function is constructed with the distance from each projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched based on this cost function so that the color difference on both sides of the stitching line is minimized, which yields an output image with good overall quality.
  • The non-ground points in the point cloud may also be excluded, so that the stitching line automatically avoids non-ground areas and a visually better output image is obtained. (A hedged sketch of such a per-pixel seam cost follows below.)
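  • A sketch of the kind of per-pixel cost such a seam search might use is given below. The weighting, the penalty value, and the seam solver (e.g. dynamic programming or graph cut) are assumptions and are not taken from the application.

```python
import numpy as np

def seam_cost(proj_a, proj_b, dist_a, dist_b, non_ground_mask,
              w_dist=0.1, penalty=1e6):
    """Per-pixel cost of routing the stitching line between two projections.

    proj_a, proj_b:  HxWx3 projections of two overlapping images on the terrain surface.
    dist_a, dist_b:  HxW distances from each projected pixel to its photographing device.
    non_ground_mask: HxW boolean mask of non-ground points (buildings, vegetation, ...).
    Lower cost means the seam may pass through that pixel.
    """
    color_diff = np.linalg.norm(proj_a.astype(np.float32) -
                                proj_b.astype(np.float32), axis=2)
    distance_term = np.abs(dist_a - dist_b)   # prefer pixels seen comparably by both views
    cost = color_diff + w_dist * distance_term
    cost[non_ground_mask] = penalty           # push the seam away from non-ground areas
    return cost  # feed this cost map to a seam search (dynamic programming, graph cut, ...)
```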
  • FIG. 3a and FIG. 3b are schematic diagrams of two output images of the same scene provided by the present invention: FIG. 3a is an output image obtained by using an estimated (average) elevation surface as the projection surface, and FIG. 3b is an output image obtained by using the terrain surface formed by point cloud fitting as the projection surface and stitching with the cost function method.
  • Because the average elevation surface used as the projection surface cannot accurately fit the terrain, the output image of FIG. 3a shows severe stitching misalignment.
  • In FIG. 3b, the terrain surface formed by point cloud fitting is used as the projection surface, which fits the terrain more accurately, and the cost function minimizes the chromatic aberration on both sides of the stitching line; the resulting output image therefore shows no obvious misalignment and has a better overall effect. Thus, in the embodiments of the present invention, fitting the terrain surface from the point cloud and performing the stitching with a cost function can solve the problem of mosaic misalignment in the output image.
  • Before the stitching process, the projections of the first image and/or the second image on the terrain surface may also be adjusted in color and brightness according to a preset strategy, which enables a better stitching effect in the subsequent stitching process. (A simple form of such an adjustment is sketched below.)
  • FIG. 4a and FIG. 4b are two output images of the same scene according to an embodiment of the present invention.
  • The projections on the terrain surface in FIG. 4a were not processed for color and brightness, so the overall color and brightness of the output image in FIG. 4a are poor and the visual effect is inferior; in FIG. 4b, the brightness and color of the projections on the terrain surface were processed before stitching, so the obtained output image has better overall color and brightness and a better visual effect. Therefore, performing color and brightness processing on the projections on the terrain surface before the stitching process can effectively improve the visual effect of the output image.
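  • One simple and commonly used form of such an adjustment, shown here only as an assumed example rather than the method actually claimed, is to match the mean and standard deviation of each projection to a reference projection within their overlap region (8-bit imagery is assumed).

```python
import numpy as np

def match_color_stats(projection, reference, overlap_mask):
    """Gain/offset adjustment aligning a projection's color statistics, in the
    overlapping area, with a reference projection before seam stitching."""
    adjusted = projection.astype(np.float32).copy()
    for c in range(projection.shape[2]):
        src = projection[..., c][overlap_mask].astype(np.float32)
        ref = reference[..., c][overlap_mask].astype(np.float32)
        gain = ref.std() / (src.std() + 1e-6)
        offset = ref.mean() - gain * src.mean()
        adjusted[..., c] = gain * projection[..., c] + offset
    return np.clip(adjusted, 0, 255).astype(projection.dtype)  # assumes 8-bit data
```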
  • the output image involved in this embodiment may be specifically an orthophoto, such as an orthophoto map or other image with real geographic coordinate information obtained according to orthographic projection.
  • In this embodiment, the first image obtained by the first photographing device mounted on the aircraft, whose FOV is greater than or equal to the preset threshold, is acquired, and the second image obtained by the second photographing device carried by the aircraft is acquired; based on a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured are calculated; a terrain surface is generated based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
  • Because the FOV of the first photographing device in this embodiment is greater than or equal to the preset threshold, and the larger the FOV, the higher the accuracy of the elevation surface (i.e. the terrain surface) obtained by fitting the images captured by the first photographing device, the images captured by the other photographing devices on the aircraft can be projected onto this elevation surface to obtain correspondingly accurate orthophotos, which improves the accuracy of orthophoto generation.
  • FIG. 5 is a flowchart of a method for generating an output image according to an embodiment of the present invention. As shown in FIG. 5, based on the embodiment of FIG. 1, the method includes:
  • Step 501 Acquire a first visible light image obtained by the first photographing device mounted on the aircraft, and acquire a second visible light image obtained by the second photographing device mounted on the aircraft, where the first photographing device and the second photographing device shoot synchronously.
  • In this embodiment, the first photographing device may specifically be a wide-angle camera, and the second photographing device may specifically be a telephoto camera.
  • Step 502 Calculate a position and a posture of the first photographing device when the first visible light image is captured based on a preset image processing algorithm.
  • Step 503 Calculate a position and a posture of the second photographing device when the second visible light image is captured, based on a relative positional relationship between the first photographing device and the second photographing device that are pre-calibrated.
  • Step 504 Generate a terrain surface based on the position and posture of the first visible light image and the first photographing device when the first visible light image is captured.
  • Step 505 Perform projection and stitching processing on the second visible light image on the terrain surface, based on the position and posture of the second photographing device when the second visible light image is captured, to obtain a visible light output image corresponding to the second photographing device.
  • The projection of the second visible light image on the terrain surface can be stitched in any of the following ways.
  • The projection of the second visible light image on the terrain surface may be stitched based on the stitching line used for stitching the projection of the first visible light image on the terrain surface, to obtain the visible light output image corresponding to the second photographing device.
  • Alternatively, dense matching may be performed based on the position and posture of the first photographing device when the first visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; a cost function is then constructed based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and the projection of the second visible light image on the terrain surface is stitched based on this cost function to obtain the visible light output image corresponding to the second photographing device.
  • Alternatively, dense matching may be performed based on the position and posture of the second photographing device when the second visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; a cost function is then constructed based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and the projection of the second visible light image on the terrain surface is stitched based on this cost function to obtain the visible light output image corresponding to the second photographing device.
  • Alternatively, the first visible light image is projected onto the terrain surface based on the position and posture of the first photographing device when the first visible light image is captured; image correction processing is further performed on the projection of the first visible light image on the terrain surface based on the projection of the second visible light image on the terrain surface; a visible light output image corresponding to the first photographing device is obtained from the corrected projection; and the projection of the second visible light image on the terrain surface is stitched based on the stitching line of that visible light output image.
  • An orthographic visible light output image may also be obtained by performing orthographic processing on the projection of the second visible light image on the terrain surface.
  • In this embodiment the first photographing device is specifically a wide-angle camera and the second photographing device is specifically a telephoto camera, so the FOV of the first photographing device is larger and the FOV of the second photographing device is smaller. Therefore, the accuracy and precision of the elevation surface (i.e. the terrain surface) obtained based on the position and posture of the first photographing device when capturing the first visible light image are higher than those of an elevation surface obtained based on the position and posture of the second photographing device when capturing the second visible light image. Projecting the second visible light image obtained by the second photographing device onto this elevation surface therefore yields an orthophoto with high accuracy and precision, which improves the accuracy of the orthophoto obtained from the second photographing device.
  • FIG. 6 is a flowchart of a method for generating an output image according to an embodiment of the present invention. As shown in FIG. 6 , on the basis of the embodiment of FIG. 1 , the method includes:
  • Step 601 Acquire a visible light image obtained by the first photographing device mounted on the aircraft and a near-infrared image captured by the second photographing device mounted on the aircraft, where the first photographing device and the second photographing device shoot synchronously.
  • In this embodiment, the first photographing device may specifically be a visible light camera with an FOV greater than or equal to a preset threshold (such as a wide-angle camera), and the second photographing device may specifically be a near-infrared camera.
  • Step 602 Calculate a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm.
  • Step 603 Calculate a position and a posture of the second photographing device when photographing the near-infrared image based on a relative positional relationship between the first photographing device and the second photographing device that are pre-calibrated.
  • Step 604 Generate a terrain surface based on the visible light image and the position and posture of the first photographing device when the visible light image is captured.
  • Step 605 Perform projection and stitching processing on the near-infrared image on the terrain surface, based on the position and posture of the second photographing device when the near-infrared image is captured, to obtain a near-infrared output image corresponding to the second photographing device.
  • the method for splicing the projection of the near-infrared image on the surface of the terrain includes:
  • The ground station can stitch the projection of the visible light image on the terrain surface to obtain an orthographic visible light output image, and stitch the projection of the near-infrared image on the terrain surface based on the stitching line of the visible light image on the terrain surface to obtain a near-infrared output image.
  • In this way, the visible light output image corresponding to the first photographing device and the near-infrared output image corresponding to the second photographing device are both obtained, and the visible light output image and/or the near-infrared output image may be displayed on the ground station in this embodiment.
  • Alternatively, dense matching may be performed based on the position and posture of the first photographing device when capturing the visible light image to generate a corresponding dense point cloud or semi-dense point cloud; a cost function is then constructed based on the projection, on the terrain surface, of the near-infrared image captured by the second photographing device and the point cloud generated by the dense matching; and the projection of the near-infrared image on the terrain surface is stitched based on this cost function to obtain the near-infrared output image corresponding to the second photographing device.
  • Alternatively, the ground station performs dense matching based on the position and posture of the second photographing device when capturing the near-infrared image to generate a corresponding dense point cloud or semi-dense point cloud, constructs a cost function based on the projection of the near-infrared image on the terrain surface and this point cloud, and stitches the projection of the near-infrared image on the terrain surface based on the cost function to obtain an orthographic near-infrared output image.
  • This embodiment further calculates the Normalized Difference Vegetation Index (NDVI) and/or the Enhanced Vegetation Index (EVI) based on the visible light output image and the near-infrared output image obtained above, draws the corresponding index map based on the calculated NDVI and/or EVI, and displays the index map. (A minimal NDVI/EVI computation is sketched below.)
  • The growth state of the vegetation may also be analyzed based on the index map and the analysis result output, which provides vegetation analysis data and facilitates vegetation analysis.
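  • Given co-registered near-infrared and red-band (and, for EVI, blue-band) output images, the indices follow their standard definitions. The sketch below assumes the bands are already scaled to reflectance-like values; this scaling and the function names are assumptions, not details from the application.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from co-registered NIR and red bands."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)   # values fall roughly in [-1, 1]

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0, eps=1e-6):
    """Enhanced Vegetation Index with the commonly used coefficient set
    (assumes reflectance-scaled inputs)."""
    nir, red, blue = (b.astype(np.float32) for b in (nir, red, blue))
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l + eps)
```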
  • FIG. 7a is an unstitched near-infrared image obtained by a near-infrared camera; FIG. 7b is the near-infrared output image obtained after ortho-stitching of the near-infrared image shown in FIG. 7a; FIG. 7c is the visible light output image corresponding to a visible light camera photographed synchronously with the near-infrared camera of FIG. 7a; FIG. 7d is an NDVI index map calculated using the near-infrared image and the red band of the visible light output image; and FIG. 7e is a schematic diagram of the result of pseudo-color rendering, using green, of the ortho-stitched near-infrared image.
  • It should be noted that this embodiment uses NDVI and EVI as example indicators for analyzing the growth status of crops and vegetation; the actual scene is not limited to NDVI and EVI, and they may be replaced with other indicators that can be used to analyze the growth state of vegetation, which is not specifically limited in this embodiment.
  • In addition, by simultaneously acquiring the visible light image and the near-infrared image during image acquisition and calculating the NDVI and/or EVI index from the different responses of plants to the two spectra, the NDVI and/or EVI index can be used as an important basis for classifying vegetation, which improves the reliability of point cloud classification.
  • The aircraft can also be equipped with a wide-angle camera, a telephoto camera, and a near-infrared camera at the same time; the method for processing the images captured by the telephoto camera and the near-infrared camera based on the images captured by the wide-angle camera is similar to that of the foregoing embodiments and is not described again here.
  • FIG. 8 is a flowchart of a method for generating an output image according to an embodiment of the present invention. As shown in FIG. 8 , based on the embodiment of FIG. 1 , the method includes:
  • Step 801 Acquire a visible light image obtained by the first photographing device mounted on the aircraft, and acquire an infrared image obtained by the second photographing device mounted on the aircraft, and the first photographing device and the second photographing device simultaneously photograph.
  • In this embodiment, the first photographing device may specifically be a visible light camera with an FOV greater than or equal to a preset threshold (such as a wide-angle camera), and the second photographing device may specifically be an infrared camera.
  • Step 802 Calculate a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm.
  • Step 803 Calculate a position and a posture of the second photographing device when the infrared image is captured based on a relative positional relationship between the first photographing device and the second photographing device that are pre-calibrated.
  • Step 804 Generate a terrain surface based on the visible light image and the position and posture of the first photographing device when the visible light image is captured.
  • Step 805 Perform projection and splicing processing on the infrared image on the terrain surface to obtain an infrared output image corresponding to the second photographing device, based on the position and posture of the second photographing device when the infrared image is captured.
  • the method for splicing the projection of the infrared image on the surface of the terrain includes:
  • The ground station can stitch the projection of the visible light image on the terrain surface to obtain a visible light output image, and stitch the projection of the infrared image based on the stitching line of the projection of the visible light image on the terrain surface to obtain an infrared output image.
  • Alternatively, dense matching may be performed based on the position and posture of the first photographing device when capturing the visible light image to generate a corresponding dense point cloud or semi-dense point cloud; a cost function is then constructed based on the projection, on the terrain surface, of the infrared image captured by the second photographing device and the point cloud generated by the dense matching; and the projection of the infrared image on the terrain surface is stitched based on this cost function to obtain the infrared output image corresponding to the second photographing device.
  • Alternatively, the ground station performs dense matching based on the position and posture of the second photographing device when capturing the infrared image to generate a corresponding dense point cloud or semi-dense point cloud, constructs a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching, and stitches the projection of the infrared image on the terrain surface based on this cost function to obtain an infrared output image.
  • the visible light output image and/or the infrared output image may be displayed on the ground station to improve the user experience.
  • Based on the characteristics of infrared images, this embodiment can also identify heat source objects (such as photovoltaic panels, power lines, and the like) from the infrared images captured by the infrared camera or from the infrared output image corresponding to the infrared camera.
  • Taking a power line as an example, a power line is difficult to recognize in ordinary aerial visible light images because of its small diameter; based on the characteristic that power lines generate heat, this embodiment can easily identify power lines from aerial infrared images.
  • This embodiment can further model the identified power line by using the position and posture of the second photographing device when capturing the infrared image together with a preset power line mathematical model, to form a power line layer, and superimpose and display the power line layer on the visible light output image obtained above.
  • In this way, by mounting an infrared camera on the aircraft, identifying the power line from the infrared image obtained by the infrared camera or from the corresponding infrared output image, modeling the identified power line to generate a power line layer, and overlaying the power line layer on the visible light output image, the power line can be clearly displayed on the orthophoto and its specific information can be obtained by measurement. (A minimal overlay sketch follows below.)
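  • The sketch below illustrates only the general idea of overlaying an identified heat-source (e.g. power line) mask as a layer on the co-registered visible light output image; the simple temperature thresholding used here for identification is an assumption and is far cruder than the model-based identification described above.

```python
import numpy as np

def overlay_heat_source_layer(visible_rgb, infrared, temp_threshold,
                              color=(255, 0, 0), alpha=0.6):
    """Highlight hot pixels (e.g. power lines, photovoltaic panels) from a
    co-registered infrared output image on top of the visible output image."""
    mask = infrared > temp_threshold                  # crude heat-source mask
    overlay = visible_rgb.astype(np.float32).copy()
    overlay[mask] = (1 - alpha) * overlay[mask] + alpha * np.array(color, np.float32)
    return overlay.astype(visible_rgb.dtype), mask
```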
  • The aircraft can also be equipped with a wide-angle camera, a telephoto camera, and an infrared camera at the same time; the method for processing the images captured by the telephoto camera and the infrared camera based on the images captured by the wide-angle camera is similar to that of the foregoing embodiments and is not described again here.
  • FIG. 9 is a flowchart of a method for generating an output image according to an embodiment of the present invention. As shown in FIG. 9, based on the embodiment of FIG. 1, the method includes:
  • Step 901 Acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold.
  • the photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground.
  • The first photographing device and the second photographing device shoot at the same shooting interval in the horizontal direction.
  • When the flying height of the aircraft relative to the ground changes, the shooting interval of the first photographing device and the second photographing device changes accordingly.
  • The first photographing device and the second photographing device may also shoot in the horizontal direction at equal-time shooting intervals, wherein the shooting interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the surface.
  • For example, the shooting interval may be correspondingly increased when the height relative to the ground surface increases, and correspondingly reduced when the height relative to the ground surface decreases. (The relation between height, FOV, overlap rate, and interval is sketched below.)
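  • The relation between relative flying height, FOV, and overlap rate can be written down directly. The helper below is an illustrative sketch under a flat-terrain, nadir-looking assumption, not the claimed control logic: it computes the along-track ground footprint and the corresponding shooting distance interval.

```python
import math

def shooting_interval(relative_height, fov_deg, overlap):
    """Along-track distance between exposures that keeps a given forward overlap.

    relative_height: flying height of the aircraft above the terrain (m).
    fov_deg:         along-track field of view of the photographing device (degrees).
    overlap:         required image overlap rate, e.g. 0.8 for 80 %.
    """
    footprint = 2.0 * relative_height * math.tan(math.radians(fov_deg) / 2.0)
    return (1.0 - overlap) * footprint   # larger relative height -> larger interval

# Example: at 100 m with a 70 degree FOV and 80 % overlap the interval is about 28 m;
# at 50 m it drops to about 14 m, matching the behaviour described above.
print(shooting_interval(100.0, 70.0, 0.8))  # ~28.0
```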
  • Step 902 Calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when the second image is photographed.
  • Step 903 Generate a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured.
  • Step 904 Perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  • the embodiment of the invention provides a ground station, which may be the ground station described in the above embodiment.
  • FIG. 11 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • As shown in FIG. 11, the ground station 10 includes a communication interface 11 and one or more processors 12, the one or more processors working independently or in cooperation, and the communication interface 11 being connected to the processor 12.
  • The communication interface 11 is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold.
  • The processor 12 is configured to: calculate, based on a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured; generate a terrain surface based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, project and stitch the second image on the terrain surface to obtain an output image.
  • the FOV of the first photographing device is greater than the FOV of the second photographing device.
  • the FOV of the second photographing device is less than the preset threshold.
  • The processor 12 is configured to: convert, according to GPS information of preset image control points, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured into positions and postures in the world coordinate system.
  • The processor 12 is configured to: determine, according to the GPS information of a preset image control point, the relative position of the image control point in the first image captured by the first photographing device; convert, based on the relative position of the image control point in the first image and the GPS information of the image control point, the position and posture of the first photographing device when the first image is captured into a position and posture in the world coordinate system; and convert, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second image is captured into a position and posture in the world coordinate system.
  • the ground station further includes a display component 13 communicatively coupled to the processor 12, and the display component 13 is configured to: display the relative position of the image control point on the first image.
  • the processor 12 is configured to: calculate, by using a structure from motion (SFM) algorithm with a preset image control point as a constraint condition, the position and posture of the first photographing device when the first image is captured; and calculate, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second image is captured.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the first photographing device when the first image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; and fit the generated point cloud to form a terrain surface.
  • the processor 12 is configured to: extract ground points from the point cloud generated by the dense matching, and form a terrain surface based on a fit of the extracted ground points.
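A minimal sketch, assuming the densely matched point cloud is already available as an N x 3 array of (x, y, z) points, of the two operations just described: keep approximately ground-level points and fit a terrain surface over a regular grid. The per-cell low-point filter and the griddata interpolation are illustrative choices; the patent does not specify the filtering or fitting method.

```python
import numpy as np
from scipy.interpolate import griddata

def fit_terrain_surface(points, cell=2.0, band=0.5):
    """Filter ground points and interpolate a height grid from an Nx3 cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    key = ix * (iy.max() + 1) + iy                     # one id per grid cell
    ground = np.zeros(len(points), dtype=bool)
    for k in np.unique(key):                           # keep points near the local minimum
        idx = np.where(key == k)[0]
        ground[idx[z[idx] <= z[idx].min() + band]] = True
    gx, gy = np.meshgrid(np.arange(x.min(), x.max(), cell),
                         np.arange(y.min(), y.max(), cell))
    dem = griddata((x[ground], y[ground]), z[ground], (gx, gy), method="linear")
    return gx, gy, dem
```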
  • the processor 12 is further configured to: perform global color and/or brightness adjustment on the projection of the second image on the terrain surface.
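One common way to realise the global colour/brightness adjustment mentioned above is to estimate a single gain and bias per projected image so that its overlap with the mosaic built so far matches in intensity. The least-squares gain/bias model and the 8-bit clip range below are assumptions, since the patent does not name an adjustment algorithm.

```python
import numpy as np

def gain_bias_adjust(patch, mosaic, overlap_mask):
    """Rescale `patch` so that gain * patch + bias matches `mosaic` on the overlap.

    overlap_mask is a boolean H x W array of pixels covered by both inputs.
    """
    src = patch[overlap_mask].astype(np.float64).ravel()
    dst = mosaic[overlap_mask].astype(np.float64).ravel()
    A = np.stack([src, np.ones_like(src)], axis=1)
    gain, bias = np.linalg.lstsq(A, dst, rcond=None)[0]
    adjusted = gain * patch.astype(np.float64) + bias
    return np.clip(adjusted, 0, 255).astype(patch.dtype)   # assumes 8-bit imagery
```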
  • the preset image processing algorithm includes any one of the following: aerial triangulation, a structure from motion (SFM) algorithm, and a simultaneous localization and mapping (SLAM) algorithm.
  • the output image includes an orthophoto.
  • the shooting interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft relative to the ground.
  • the first photographing device and the second photographing device respectively photograph at the same photographing interval in the horizontal direction.
  • when the height of the aircraft relative to the ground surface changes, the shooting interval of the first photographing device and the second photographing device changes accordingly.
  • the first photographing device and the second photographing device photograph in the horizontal direction at varying photographing intervals, where the photographing interval is associated with a preconfigured image overlap rate and with the height of the aircraft relative to the ground surface.
  • the ground station provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide a ground station.
  • the ground station is based on the embodiment of FIG. 11, and the communication interface 11 is configured to: acquire a first visible light image captured by the first imaging device mounted on the aircraft, and acquire a second visible light image captured by the second imaging device mounted on the aircraft, where the first photographing device and the second photographing device shoot simultaneously.
  • the processor 12 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the first visible light image is captured; and calculate, based on the relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second visible light image is captured.
  • the processor 12 is configured to: splice the projection of the second visible light image on the terrain surface based on the splicing line used when splicing the projection of the first visible light image on the terrain surface, to obtain a visible light output image corresponding to the second photographing device.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the first photographing device when the first visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the second visible light image on the terrain surface based on the cost function, to obtain a visible light output image corresponding to the second photographing device.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the second photographing device when the second visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the second visible light image on the terrain surface based on the cost function, to obtain a visible light output image corresponding to the second photographing device.
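A hedged sketch of one way the cost-function splicing described above could work: inside the overlap of two ortho projections, a per-pixel cost combines the colour disagreement between the projections with the local relief of the densely matched point cloud (so the seam avoids tall or poorly modelled structures), and a low-cost seam is traced by dynamic programming. The cost terms, the weights and the seam-carving formulation are assumptions, not the patent's algorithm.

```python
import numpy as np

def vertical_seam(proj_a, proj_b, height_grid, w_color=1.0, w_relief=0.5):
    """Return one column index per row; columns left of it take proj_a, the rest proj_b.

    proj_a, proj_b : H x W x C overlapping ortho projections
    height_grid    : H x W terrain heights from the dense point cloud
    """
    color = np.linalg.norm(proj_a.astype(np.float32) - proj_b.astype(np.float32), axis=-1)
    gy, gx = np.gradient(height_grid.astype(np.float32))
    cost = w_color * color + w_relief * np.hypot(gx, gy)
    h, w = cost.shape
    acc = cost.copy()                                   # accumulated cost table
    for r in range(1, h):
        left = np.roll(acc[r - 1], 1)
        left[0] = np.inf
        right = np.roll(acc[r - 1], -1)
        right[-1] = np.inf
        acc[r] += np.minimum(np.minimum(left, acc[r - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for r in range(h - 2, -1, -1):                      # backtrack the cheapest path
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(acc[r, lo:hi]))
    return seam
```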
  • the processor 12 is further configured to: project the first visible light image onto the terrain surface based on a position and a posture of the first photographing device when the first visible light image is captured.
  • the processor 12 is further configured to orthographically process the projection of the second visible light image on the terrain surface.
  • the first photographing device is a wide-angle camera
  • the second photographing device is a telephoto camera.
  • the ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 5, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide a ground station.
  • the ground station is based on the embodiment of FIG. 11.
  • the communication interface 11 is configured to: acquire a visible light image captured by a first photographing device carried by an aircraft, and acquire a near-infrared image captured by the second photographing device, where the first photographing device and the second photographing device shoot simultaneously.
  • the processor 12 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the visible light image is captured; and calculate, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the near-infrared image is captured.
  • the processor 12 is configured to: splice the projection of the visible light image on the terrain surface to obtain a visible light output image; and splice the projection of the near-infrared image on the terrain surface, based on the splicing line used when splicing the projection of the visible light image on the terrain surface, to obtain a near-infrared output image.
  • the display component 13 is configured to: display the visible light output image and/or the near infrared output image.
  • the processor 12 is further configured to: calculate a normalized difference vegetation index (NDVI) and/or an enhanced vegetation index (EVI) based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
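A short sketch of the index computation referred to above, assuming the red and blue bands come from the visible light output image, the NIR band from the near-infrared output image, and that all bands are co-registered on the same ortho grid (ideally as reflectance). The EVI coefficients are the commonly published ones (G = 2.5, C1 = 6, C2 = 7.5, L = 1); the patent does not list them.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    den = nir + red
    return (nir - red) / np.where(den == 0, np.nan, den)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced vegetation index with the usual coefficient set."""
    nir, red, blue = (b.astype(np.float32) for b in (nir, red, blue))
    den = nir + C1 * red - C2 * blue + L
    return G * (nir - red) / np.where(den == 0, np.nan, den)
```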
  • the display component 13 is further configured to: display the index map.
  • the processor 12 is further configured to: analyze the growth status of the vegetation based on the index map, and output the analysis result.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the first photographing device when the visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the near-infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the near-infrared image on the terrain surface based on the cost function, to obtain a near-infrared output image.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the second photographing device when the near-infrared image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the near-infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the near-infrared image on the terrain surface based on the cost function, to obtain a near-infrared output image.
  • the first photographing device is a wide-angle camera
  • the second photographing device is a near-infrared camera.
  • the ground station provided in this embodiment can be used to perform the method in the embodiment of FIG. 6, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide a ground station.
  • the ground station is based on the embodiment of FIG. 11 , and the communication interface 11 is configured to: acquire a visible light image obtained by the first imaging device mounted on the aircraft, and acquire an infrared image obtained by the second imaging device mounted on the aircraft.
  • the first photographing device and the second photographing device shoot simultaneously.
  • the processor 12 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the visible light image is captured; and calculate, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the infrared image is captured.
  • the processor 12 is configured to: splice the projection of the visible light image on the terrain surface to obtain a visible light output image; and splice the projection of the infrared image on the terrain surface, based on the splicing line used when splicing the projection of the visible light image on the terrain surface, to obtain an infrared output image.
  • the display component 13 is configured to: display the infrared output image and/or the visible light output image.
  • the processor 12 is configured to: identify a location of the heat source object in the infrared image captured by the second photographing device or the infrared output image.
  • the display component 13 is further configured to: display a position of the heat source object in the infrared image or the infrared output image.
  • the heat source object comprises a power line.
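A hedged sketch of one simple way to perform the heat-source identification mentioned above: pixels whose radiometric value lies well above the scene background are grouped into connected regions and reported as bounding boxes, which can then be displayed on the infrared or visible light output image. The percentile threshold, the minimum region size and the use of scipy.ndimage are assumptions; the patent does not specify a detection method.

```python
import numpy as np
from scipy import ndimage

def detect_heat_sources(ir_image, pct=99.5, min_pixels=20):
    """Return (row_min, col_min, row_max, col_max) boxes of hot connected regions."""
    hot = ir_image > np.percentile(ir_image, pct)      # unusually warm pixels
    labels, _ = ndimage.label(hot)                     # group them into regions
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is not None and np.count_nonzero(labels[sl] == i) >= min_pixels:
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes
```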
  • the processor 12 is configured to: model the identified power line according to the position and posture of the second photographing device when the infrared image is captured and a preset power line mathematical model, to form a power line layer.
  • the display component 13 is configured to: superimpose and display the power line layer on the visible light output image.
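For the power line modelling step, a catenary curve is a common choice of "power line mathematical model"; the sketch below fits one to points already attributed to a single span and expressed as horizontal distance x and height z, for example after georeferencing them with the pose of the second photographing device. The catenary choice, its parameterisation and the helper names are assumptions rather than the patent's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, z0):
    """Hanging-cable model: z = z0 + a * (cosh((x - x0) / a) - 1)."""
    return z0 + a * (np.cosh((x - x0) / a) - 1.0)

def fit_power_line(x, z):
    """Fit catenary parameters (a, x0, z0) to detected power line points."""
    p0 = [max(x.max() - x.min(), 1.0), x.mean(), z.min()]   # rough initial guess
    params, _ = curve_fit(catenary, x, z, p0=p0, maxfev=10000)
    return params
```

The fitted curve can then be sampled and drawn as the power line layer that is superimposed on the visible light output image.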
  • the processor 12 is configured to: perform dense matching based on the position and posture of the first photographing device when the visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the infrared image on the terrain surface based on the cost function, to obtain an infrared output image.
  • the processor 12 is configured to: perform dense matching based on the position and posture of the second photographing device when the infrared image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the infrared image on the terrain surface based on the cost function, to obtain an infrared output image.
  • the first photographing device is a wide-angle camera
  • the second photographing device is an infrared camera
  • the ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 8, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • FIG. 12 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • the ground station 20 includes a communication interface 21 and one or more processors 22; the one or more processors 22 work separately or cooperatively, and the communication interface 21 is connected to the processor 22. The communication interface 21 is configured to: acquire a first image captured by a first photographing device mounted on an aircraft, and acquire a second image captured by a second photographing device mounted on the aircraft.
  • the processor 22 is configured to: calculate, according to a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured; generate a terrain surface based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, project and splice the second image on the terrain surface to obtain an output image.
  • the first photographing device and the second photographing device perform photographing at the same photographing interval in the horizontal direction.
  • when the height of the aircraft relative to the ground surface changes, the shooting interval of the first photographing device and the second photographing device changes accordingly.
  • the first photographing device and the second photographing device photograph in the horizontal direction at varying photographing intervals, where the photographing interval is associated with a preconfigured image overlap rate and with the height of the aircraft relative to the ground surface.
  • the ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 9, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • An embodiment of the present invention provides an aircraft controller, which may be the aircraft controller described in the above embodiments.
  • FIG. 13 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention. As shown in FIG. 13, the aircraft controller 30 includes a communication interface 31 and one or more processors 32; the one or more processors work alone or in cooperation.
  • the communication interface 31 is connected to the processor 32.
  • the communication interface 31 is configured to: acquire a first image captured by a first photographing device mounted on the aircraft, and acquire a second image captured by a second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold. The processor 32 is configured to: calculate, according to a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured; generate a terrain surface based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, project and splice the second image on the terrain surface to obtain an output image.
  • the FOV of the first photographing device is greater than the FOV of the second photographing device.
  • the FOV of the second photographing device is less than the preset threshold.
  • the processor 32 is configured to: convert, according to GPS information of a preset image control point, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured into positions and postures in the world coordinate system.
  • the processor 32 is configured to: determine, according to the GPS information of the preset image control point, the relative position of the image control point in the first image captured by the first photographing device; convert, based on the determined relative position of the image control point in the first image and the GPS information of the image control point, the position and posture of the first photographing device when the first image is captured into a position and posture in the world coordinate system; and convert, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second image is captured into a position and posture in the world coordinate system.
  • the processor 32 is configured to: calculate, by using a structure from motion (SFM) algorithm with a preset image control point as a constraint condition, the position and posture of the first photographing device when the first image is captured; and calculate, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second image is captured.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the first photographing device when the first image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; and fit the generated point cloud to form a terrain surface.
  • the processor 32 is configured to: extract ground points from the point cloud generated by the dense matching, and form a terrain surface based on a fit of the extracted ground points.
  • the processor 32 is further configured to: perform global color and/or brightness adjustment on the projection of the second image on the terrain surface.
  • the preset image processing algorithm includes any one of the following: aerial triangulation, a structure from motion (SFM) algorithm, and a simultaneous localization and mapping (SLAM) algorithm.
  • the output image includes an orthophoto.
  • the shooting interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft relative to the ground.
  • the first photographing device and the second photographing device respectively photograph at the same photographing interval in the horizontal direction.
  • when the height of the aircraft relative to the ground surface changes, the shooting interval of the first photographing device and the second photographing device changes accordingly.
  • the first photographing device and the second photographing device photograph in the horizontal direction at varying photographing intervals, where the photographing interval is associated with a preconfigured image overlap rate and with the height of the aircraft relative to the ground surface.
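The relationship just described can be made concrete with a small back-of-the-envelope calculation: for a nadir-looking camera the along-track ground footprint grows linearly with the relative height, so the travel distance between exposures that preserves a preconfigured forward overlap grows with height as well. The nadir assumption and the example numbers are illustrative only; the patent does not give a formula.

```python
import math

def shooting_distance(relative_height_m, fov_deg, overlap=0.8):
    """Distance the aircraft may travel between exposures while keeping
    the given forward overlap, assuming a nadir-looking camera."""
    footprint = 2.0 * relative_height_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# e.g. an 84-degree FOV camera at 100 m relative height with 80% forward overlap
# allows roughly 36 m between exposures; halving the height halves the interval.
print(round(shooting_distance(100, 84, 0.8), 1))
```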
  • the aircraft controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide an aircraft controller.
  • the aircraft controller is based on the embodiment of FIG. 13. The communication interface 31 is configured to: acquire a first visible light image captured by the first imaging device mounted on the aircraft, and acquire a second visible light image captured by the second imaging device mounted on the aircraft, where the first visible light image and the second visible light image are captured simultaneously by the first photographing device and the second photographing device.
  • the processor 32 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the first visible light image is captured; and calculate, based on a pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the second visible light image is captured.
  • the processor 32 is configured to: splice the projection of the second visible light image on the terrain surface based on the splicing line used when splicing the projection of the first visible light image on the terrain surface, to obtain a visible light output image corresponding to the second photographing device.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the first photographing device when the first visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the second visible light image on the terrain surface based on the cost function, to obtain a visible light output image corresponding to the second photographing device.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the second photographing device when the second visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the second visible light image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the second visible light image on the terrain surface based on the cost function, to obtain a visible light output image corresponding to the second photographing device.
  • the processor 32 is further configured to: project the first visible light image onto the terrain surface based on the position and posture of the first photographing device when the first visible light image is captured.
  • the processor 32 is further configured to orthographically process the projection of the second visible light image on the terrain surface.
  • the first photographing device is a wide-angle camera
  • the second photographing device is a telephoto camera.
  • the aircraft controller provided in this embodiment can be used to perform the method of the embodiment of FIG. 5, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide an aircraft controller.
  • the aircraft controller is based on the embodiment of FIG. 13. The communication interface 31 is configured to: acquire a visible light image captured by a first photographing device mounted on an aircraft, and acquire a near-infrared image captured by the second photographing device, where the first photographing device and the second photographing device shoot simultaneously.
  • the processor 32 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the visible light image is captured; and calculate, based on the relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the near-infrared image is captured.
  • the processor 32 is configured to: splice the projection of the visible light image on the terrain surface to obtain a visible light output image; and splice the projection of the near-infrared image on the terrain surface, based on the splicing line used when splicing the projection of the visible light image on the terrain surface, to obtain a near-infrared output image.
  • the processor 32 is further configured to: calculate a normalized difference vegetation index (NDVI) and/or an enhanced vegetation index (EVI) based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
  • the processor 32 is further configured to: analyze the growth status of the vegetation based on the index map, and output the analysis result.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the first photographing device when the visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the near-infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the near-infrared image on the terrain surface based on the cost function, to obtain a near-infrared output image.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the second photographing device when the near-infrared image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the near-infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the near-infrared image on the terrain surface based on the cost function, to obtain a near-infrared output image.
  • the first photographing device is a wide-angle camera
  • the second photographing device is a near-infrared camera.
  • the aircraft controller provided by this embodiment can be used to perform the method of the embodiment of FIG. 6, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide an aircraft controller.
  • the aircraft controller is based on the embodiment of FIG. 13. The communication interface 31 is configured to: acquire a visible light image captured by a first photographing device carried by the aircraft, and acquire an infrared image captured by the second photographing device mounted on the aircraft, where the first photographing device and the second photographing device shoot simultaneously.
  • the processor 32 is configured to: calculate, according to a preset image processing algorithm, the position and posture of the first photographing device when the visible light image is captured; and calculate, based on the relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when the infrared image is captured.
  • the processor 32 is configured to: splice the projection of the visible light image on the terrain surface to obtain a visible light output image; and splice the projection of the infrared image on the terrain surface, based on the splicing line used when splicing the projection of the visible light image on the terrain surface, to obtain an infrared output image.
  • the processor 32 is configured to: identify a location of the heat source object in the infrared image captured by the second photographing device or the infrared output image.
  • the heat source object comprises a power line.
  • the processor 32 is configured to: model the identified power line according to the position and posture of the second photographing device when the infrared image is captured and a preset power line mathematical model, to form a power line layer.
  • the display component 13 is configured to: superimpose and display the power line layer on the visible light output image.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the first photographing device when the visible light image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the infrared image on the terrain surface based on the cost function, to obtain an infrared output image.
  • the processor 32 is configured to: perform dense matching based on the position and posture of the second photographing device when the infrared image is captured, to generate a corresponding dense point cloud or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching; and splice the projection of the infrared image on the terrain surface based on the cost function, to obtain an infrared output image.
  • the first photographing device is a wide-angle camera
  • the second photographing device is an infrared camera
  • the aircraft controller provided by this embodiment can be used to perform the method of the embodiment of FIG. 8 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention provide an aircraft controller.
  • FIG. 14 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention.
  • the aircraft controller 40 includes a communication interface 41 and one or more processors 42; the one or more processors 42 work separately or cooperatively, and the communication interface 41 is connected to the processor 42. The communication interface 41 is configured to: acquire a first image captured by a first photographing device mounted on the aircraft, and acquire a second image captured by a second photographing device mounted on the aircraft.
  • the processor 42 is configured to: calculate, according to a preset algorithm, the position and posture of the first photographing device when the first image is captured and the position and posture of the second photographing device when the second image is captured; generate a terrain surface based on the first image and the position and posture of the first photographing device when the first image is captured; and, based on the position and posture of the second photographing device when the second image is captured, project and splice the second image on the terrain surface to obtain an output image.
  • the first photographing device and the second photographing device photograph at the same photographing interval in the horizontal direction.
  • when the height of the aircraft relative to the ground surface changes, the shooting interval of the first photographing device and the second photographing device changes accordingly.
  • the first photographing device and the second photographing device photograph in the horizontal direction at varying photographing intervals, where the photographing interval is associated with a preconfigured image overlap rate and with the height of the aircraft relative to the ground surface.
  • the aircraft controller provided in this embodiment can be used to perform the method of the embodiment of FIG. 9 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • an embodiment of the present invention provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to execute the output image generation method provided by the foregoing embodiments.
  • Embodiments of the present invention provide a drone.
  • the drone includes: a fuselage; a power system mounted on the fuselage for providing flight power; a first photographing device and a second photographing device mounted on the fuselage for capturing images, where the FOV of the first photographing device is greater than or equal to a preset threshold; and the aircraft controller as described in the above embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods of the embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

Provided by embodiments of the present invention are an output image generation method, device and unmanned aerial vehicle, the method comprising: obtaining a first image captured by a first capturing device that is carried by an aircraft and has a field of view (FOV) greater than or equal to a preset threshold and a second image captured by a second capturing device carried by the aircraft, and on the basis of a preset algorithm, calculating a position and orientation of the first capturing device when the first image is captured and a position and orientation of the second capturing device when the second image is captured; generating a terrain surface on the basis of the first image and the position and orientation of the first capturing device when the first image is captured; and projecting and splicing the second image on the terrain surface on the basis of the position and orientation of the second capturing device when the second image is captured to obtain an output image. The method, device and unmanned aerial vehicle provided by embodiments of the present invention may improve the accuracy of the output image.

Description

输出影像生成方法、设备及无人机Output image generation method, device and drone 技术领域Technical field
本申请涉及无人机应用技术领域,尤其涉及一种输出影像生成方法、设备及无人机。The present application relates to the field of UAV application technologies, and in particular, to an output image generation method, device, and drone.
背景技术Background technique
数字正射影像(Digital Orthophoto Map,简称DOM)是利用数字高程模型对扫描处理的数字化的航空像片/遥感影像(单色/彩色),经逐个象元进行投影差改正,再按影像镶嵌,根据图幅范围进行拼接生成的影像。该影像由于使用了真实的地形表面为拼接投影面,因此具备真实的地理坐标信息,可以在该影像上度量真实的距离。Digital Orthophoto Map (DOM) is a digital aerial image/remote sensing image (monochrome/color) that is scanned and processed by digital elevation model. The projection difference is corrected by pixel, and then image mosaic. The image generated by stitching according to the range of the frame. Since the image uses a real terrain surface as a mosaic projection surface, it has real geographic coordinate information, and the true distance can be measured on the image.
在数字正射影像的制作中为了获取较为正射的航拍影像通常采用较高的飞行高度,为了使影像在采用较高的飞行高度的同时保证较高的地面分辨率,需要采用焦距较长的相机(小视场角),但根据摄影测量前方交会的精度与交会角相关(在一定范围内交会角越大精度越高)的原理,利用长焦相机交会得到的物方点的交会角较小几何精度较低,尤其在高程方向,高程方向的不准确会导致最终的数字正射影像的精度降低,并且要保证长焦相机采集的影像满足较高的重叠率需要拍摄大量的影像,加大了计算成本。In the production of digital orthophotos, a higher flying height is usually used in order to obtain a more orthographic aerial image. In order to ensure a higher ground resolution while using a higher flying height, a longer focal length is required. Camera (small angle of view), but according to the principle that the accuracy of the front intersection is related to the intersection angle (the greater the intersection angle is, the higher the accuracy is in a certain range), the intersection angle of the object point obtained by the telephoto camera intersection is smaller. The geometric accuracy is low, especially in the elevation direction. The inaccuracy in the elevation direction will result in the accuracy of the final digital orthophoto image, and it is necessary to ensure that the image captured by the telephoto camera meets a high overlap rate and needs to shoot a large number of images. Calculated the cost.
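The trade-off described in this background paragraph can be illustrated with a small ground sampling distance (GSD) calculation, which is not taken from the patent: for a nadir image, GSD is roughly pixel pitch times height divided by focal length, so holding the GSD constant at a higher flight height forces a proportionally longer focal length and therefore a narrower field of view. The sensor and lens numbers below are assumed example values.

```python
def ground_sampling_distance(height_m, focal_length_mm, pixel_pitch_um):
    """Approximate ground size of one pixel, in metres, for a nadir image."""
    return pixel_pitch_um * 1e-6 * height_m / (focal_length_mm * 1e-3)

# With an assumed 2.4 um pixel pitch: a 24 mm lens at 100 m gives about 1 cm/pixel,
# while keeping ~1 cm/pixel at 400 m needs roughly a 96 mm (telephoto) lens.
print(ground_sampling_distance(100, 24, 2.4), ground_sampling_distance(400, 96, 2.4))
```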
发明内容Summary of the invention
本发明实施例提供一种输出影像生成方法、设备及无人机，以提高输出影像的准度和精度。The embodiments of the present invention provide an output image generation method, device and drone, to improve the accuracy and precision of the output image.
本发明实施例的第一方面是提供一种输出影像生成方法,包括:A first aspect of the present invention provides a method for generating an output image, including:
获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的视场角FOV大于或等于预设阈值;Obtaining a first image obtained by the first photographing device carried by the aircraft, and acquiring a second image obtained by the second photographing device mounted on the aircraft, wherein the first photographing device has a field of view angle FOV greater than or equal to a preset threshold ;
基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态; Calculating a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when photographing the second image, based on a preset algorithm;
基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;Generating a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured;
基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。And based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
本发明实施例的第二方面是提供一种输出影像生成方法,包括:A second aspect of the present invention provides a method for generating an output image, including:
获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;Acquiring a first image obtained by the first photographing device carried by the aircraft, and obtaining a second image obtained by the second photographing device mounted on the aircraft, wherein the FOV of the first photographing device is greater than or equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground;
基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when photographing the second image, based on a preset algorithm;
基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;Generating a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured;
基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。And based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
本发明实施例的第三方面是提供一种地面站,包括:A third aspect of the embodiments of the present invention provides a ground station, including:
通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Equal to the preset threshold;
所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
所述处理器还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
本发明实施例的第四方面是提供一种地面站,包括: A fourth aspect of the embodiments of the present invention provides a ground station, including:
通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Is equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground;
所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
所述处理器用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
本发明实施例的第五方面是提供一种飞行器控制器,包括:A fifth aspect of the embodiments of the present invention provides an aircraft controller, including:
通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Equal to the preset threshold;
所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
所述处理器还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
本发明实施例的第六方面是提供一种飞行器控制器,包括: A sixth aspect of the embodiments of the present invention provides an aircraft controller, including:
通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Is equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground;
所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
所述处理器用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
本发明实施例的第七方面是提供一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如上述第一方面或第二方面所述的输出影像生成方法A seventh aspect of the embodiments of the present invention provides a computer readable storage medium, comprising instructions, when executed on a computer, causing a computer to execute the output image generating method according to the first aspect or the second aspect described above
本发明实施例的第八方面是提供一种无人机,包括:An eighth aspect of the embodiments of the present invention provides a drone, including:
机身;body;
动力系统,安装在所述机身,用于提供飞行动力;a power system mounted to the fuselage for providing flight power;
第一拍摄设备和第二拍摄设备,安装在所述机身,用于拍摄影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;a first photographing device and a second photographing device are mounted on the body for capturing an image, wherein an FOV of the first photographing device is greater than or equal to a preset threshold;
以及如上所述的飞行器控制器。And the aircraft controller as described above.
本发明实施例,通过获取飞行器搭载的FOV大于或等于预设阈值的第一拍摄设备拍摄获得的第一影像,以及飞行器搭载的第二拍摄设备拍摄获得的第二影像,并基于预设算法,计算第一拍摄设备在拍摄第一影像时的位置和姿态和第二拍摄设备在拍摄第二影像时的位置和姿态;基于第一影像和第一拍摄设备在拍摄第一影像时的位置和姿态,生成地形表面;基于第二拍摄设备在拍摄所述第二影像时的位置和姿态,在该地形表面上对第二影像进行投影和拼接处理,获得输出影像。由于本发明实施例中第一拍摄设备的FOV大 于或等于预设阈值,而FOV越大,基于第一拍摄设备拍摄的影像拟合获得的高程面(即地形表面)的精度就越高,从而将飞行器上其他拍摄设备拍摄的影像投影到该高程面上就能得到相应的精度较高的正射影像,提高了生成正射影像的精度。In the embodiment of the present invention, the first image obtained by the first photographing device with the FOV of the aircraft being greater than or equal to the preset threshold is acquired, and the second image obtained by the second photographing device carried by the aircraft is obtained, and based on a preset algorithm, Calculating a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when the second image is photographed; and determining a position and a posture when the first image is taken based on the first image and the first photographing device Generating a terrain surface; and based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the surface of the terrain to obtain an output image. Due to the large FOV of the first photographing device in the embodiment of the present invention Or equal to a preset threshold, and the larger the FOV, the higher the accuracy of the elevation surface (ie, the terrain surface) obtained based on the image fitting captured by the first photographing device, thereby projecting images taken by other photographing devices on the aircraft to the The corresponding high-precision orthophotos can be obtained on the elevation surface, which improves the accuracy of generating orthophotos.
附图说明DRAWINGS
图1为本发明提供的一种输出影像生成方法的流程图;1 is a flowchart of a method for generating an output image according to the present invention;
图2为本发明实施例提供的地面站与飞行器的连接示意图;2 is a schematic diagram of a connection between a ground station and an aircraft according to an embodiment of the present invention;
图3a和图3b是本发明提供的两个相同场景的输出影像示意图;3a and 3b are schematic diagrams showing output images of two identical scenes provided by the present invention;
图4a和图4b为本发明实施例提供的两个相同场景下的输出影像示意图;4a and 4b are schematic diagrams of output images in two identical scenarios according to an embodiment of the present invention;
图5为本发明实施例提供的一种输出影像生成方法流程图;FIG. 5 is a flowchart of a method for generating an output image according to an embodiment of the present invention;
图6为本发明实施例提供的一种输出影像生成方法流程图;FIG. 6 is a flowchart of a method for generating an output image according to an embodiment of the present invention;
图7a为近红外相机拍摄获得的未拼接的近红外影像;Figure 7a is an unspliced near-infrared image obtained by a near-infrared camera;
图7b为图7a所示的近红外影像经过正射拼接后获得的近红外输出影像;Figure 7b is a near-infrared output image obtained after the near-infrared image shown in Figure 7a is obtained by ortho-splicing;
图7c为与图7a近红外相机同步拍摄的可见光相机对应的可见光输出影像;Figure 7c is a visible light output image corresponding to a visible light camera photographed synchronously with the near infrared camera of Figure 7a;
图7d为利用近红外影像和可见光输出影像的红色波段影像计算得到的NVDI指数图;7d is an NVDI index map calculated by using a near-infrared image and a red band image of a visible light output image;
图7e为为经过正射拼接后的近红外影像采用绿色进行伪彩色渲染的结果示意图;7e is a schematic diagram showing the result of pseudo color rendering using green for the near-infrared image after ortho-splicing;
图8为本发明实施例提供的一种输出影像生成方法流程图;FIG. 8 is a flowchart of a method for generating an output image according to an embodiment of the present invention;
图9为本发明实施例提供的一种输出影像生成方法流程图;FIG. 9 is a flowchart of a method for generating an output image according to an embodiment of the present invention;
图10a-图10b为本发明实施例提供的两种拍摄间隔示意图;10a-10b are schematic diagrams of two shooting intervals provided by an embodiment of the present invention;
图11为本发明实施例提供的地面站的结构示意图;11 is a schematic structural diagram of a ground station according to an embodiment of the present invention;
图12为本发明实施例提供的地面站的结构示意图;12 is a schematic structural diagram of a ground station according to an embodiment of the present invention;
图13为本发明实施例提供的飞行器控制器的结构示意图;FIG. 13 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention;
图14为本发明实施例提供的飞行器控制器的结构示意图。 FIG. 14 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention.
具体实施方式Detailed ways
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be clearly described with reference to the accompanying drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative efforts are within the scope of the present invention.
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。It should be noted that when a component is referred to as being "fixed" to another component, it can be directly on the other component or the component can be present. When a component is considered to "connect" another component, it can be directly connected to another component or possibly a central component.
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。All technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. The terminology used in the description of the present invention is for the purpose of describing particular embodiments and is not intended to limit the invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.
下面结合附图,对本发明的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The features of the embodiments and examples described below can be combined with each other without conflict.
An embodiment of the present invention provides an output image generation method that may be performed by a ground station or by a controller mounted on an unmanned aerial vehicle. The following embodiments are described using a ground station as an example; the controller executes the method in a similar manner, which is not repeated in this embodiment. Referring to FIG. 1, FIG. 1 is a flowchart of an output image generation method provided by the present invention. As shown in FIG. 1, the method in this embodiment includes:
Step 101: Acquire a first image captured by a first photographing device mounted on an aircraft and a second image captured by a second photographing device mounted on the aircraft, where the field of view (FOV) of the first photographing device is greater than or equal to a preset threshold.
In this embodiment the ground station is a device with computing and/or processing capability; it may specifically be a remote controller, a smartphone, a tablet computer, a laptop computer, a watch, a wristband, or the like, or a combination thereof.
The aircraft in this embodiment may specifically be an unmanned aerial vehicle, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like, equipped with photographing devices.
In this embodiment the first photographing device may be a photographing device whose FOV is greater than or equal to a preset threshold (for example, a wide-angle camera with an FOV greater than or equal to the preset threshold). The value of the preset threshold may be set as required and is not limited in this embodiment. Optionally, when the first photographing device is a wide-angle camera, it captures visible-light images.
Optionally, the FOV of the second photographing device is smaller than the FOV of the first photographing device; for example, the second photographing device may be a photographing device whose FOV is smaller than the preset threshold (for example, a telephoto camera with an FOV smaller than the preset threshold). Optionally, the second photographing device may also be a near-infrared camera or an infrared camera; in that case its FOV may be greater than, smaller than, or equal to the FOV of the first photographing device. Optionally, when the second photographing device is a near-infrared camera it captures near-infrared images, when it is an infrared camera it captures infrared images, and when it is a telephoto camera it captures visible-light images.
As shown in FIG. 2, the ground station 21 and the aircraft 22 may be connected through an application programming interface (API) 23, but the connection is not limited to an API. Specifically, the ground station 21 and the aircraft 22 may be connected in a wired or wireless manner, for example via Wireless Fidelity (Wi-Fi), Bluetooth, software-defined radio (SDR), or other custom protocols.
Optionally, in this embodiment the aircraft may cruise and photograph automatically along a predetermined route, or it may cruise and photograph under the control of the ground station.
Optionally, in this embodiment the first photographing device and the second photographing device may photograph at a preset fixed shooting interval (a shooting time or a shooting distance), or, according to a preset strategy, at a shooting interval adapted to the flying height of the aircraft relative to the ground: for example, a relatively large shooting interval when the aircraft flies high above the ground and a relatively small shooting interval when it flies low. This ensures that images captured at adjacent moments satisfy a preset image overlap ratio. Of course, this is only an example; in an actual scenario the image overlap ratio is not necessarily ensured in this way.
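As a rough sketch of how such an adaptive interval could be derived, assuming a flat-ground footprint model and that the stated FOV is the along-track field of view (neither assumption comes from the embodiment itself):

```python
import math

def shooting_interval(relative_height_m, fov_deg, overlap_ratio):
    """Distance to travel between two exposures so that consecutive
    images keep the requested forward overlap (flat-ground assumption)."""
    footprint = 2.0 * relative_height_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap_ratio)

# Example: a camera with a 70 degree along-track FOV and 80% forward overlap.
print(shooting_interval(100.0, 70.0, 0.8))   # ~28 m between exposures
print(shooting_interval(50.0, 70.0, 0.8))    # ~14 m: lower flight, shorter interval
```

The same relation is what makes the interval shrink as the relative height decreases and grow as it increases.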
Optionally, in this embodiment the ground station may obtain the images captured by the first photographing device and the second photographing device in the following possible ways:
In one possible way, the aircraft sends the images captured by the first photographing device and the second photographing device to the ground station in real time through the API between the aircraft and the ground station.
In another possible way, the aircraft sends the images captured by the first photographing device and the second photographing device within a preset time interval to the ground station at that preset time interval.
In yet another possible way, after the cruise ends, the aircraft sends the images captured by the first photographing device and the second photographing device during the entire cruise to the ground station in a batch.
Specifically, based on the above ways, the aircraft may send the images captured by the first photographing device and the second photographing device to the ground station in the form of stream data or in the form of thumbnails. Depending on the computing capabilities of the aircraft and the ground station, there is no specific limit on the resolution of the returned stream data or thumbnails, which may even be the original images. This embodiment takes the thumbnail form as an example: when the images are sent to the ground station as thumbnails, the ground station may display the received thumbnails so that the user can clearly see the images captured in real time.
Step 102: Based on a preset algorithm, calculate the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image.
Optionally, in a first possible implementation, the ground station may calculate the position and posture of the first photographing device when capturing the first image based on a first preset image processing algorithm, and the position and posture of the second photographing device when capturing the second image based on a second preset image processing algorithm. The first preset image processing algorithm and the second preset image processing algorithm may be the same or different. Optionally, in this embodiment each of them may be any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm; other algorithms may also be used, which is not limited here.
When the position and posture are calculated with a structure-from-motion (SFM) algorithm, ground control points may be used as constraints to obtain a more accurate position and posture of the first photographing device when capturing the first image.
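For illustration only, the following sketch recovers a single camera pose from ground control points whose surveyed world coordinates and measured pixel positions are known, using OpenCV's solvePnP; in the aerial-triangulation/SFM pipeline described above, such control points would instead enter as constraints in a bundle adjustment. The intrinsic matrix, the control-point coordinates, and the ground-truth pose are all assumed for the example.

```python
import numpy as np
import cv2

# Assumed camera intrinsics (focal lengths and principal point in pixels).
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])

# Surveyed ground control points in a local world frame (metres).
world_pts = np.array([[0.0, 0.0, 0.0],
                      [50.0, 0.0, 1.2],
                      [50.0, 40.0, 0.8],
                      [0.0, 40.0, 0.5],
                      [25.0, 20.0, 2.0],
                      [10.0, 30.0, 0.3]], dtype=np.float64)

# For the demo, synthesise their pixel measurements from a known pose
# (nadir-looking camera roughly 100 m above the block centre).
rvec_true = np.array([[np.pi, 0.0, 0.0]])        # looking straight down
tvec_true = np.array([[-25.0, 20.0, 100.0]])
pixel_pts, _ = cv2.projectPoints(world_pts, rvec_true, tvec_true, K, None)

# Recover the pose of the photographing device from the control points.
ok, rvec, tvec = cv2.solvePnP(world_pts, pixel_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                       # world -> camera rotation
camera_position = (-R.T @ tvec).ravel()          # camera centre in world coordinates
print(ok, camera_position)                       # ~ [25, 20, 100]
```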
In a second possible way, the first photographing device and the second photographing device photograph synchronously; the ground station calculates the position and posture of the first photographing device when capturing the first image based on a preset image processing algorithm (an aerial triangulation algorithm, an SFM algorithm, a SLAM algorithm, or the like), and then calculates the position and posture of the second photographing device when capturing the second image based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
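A minimal sketch of this composition step, assuming the pre-calibrated extrinsics are expressed as a rotation and translation that map points from the first camera's frame to the second camera's frame (the numeric values are placeholders, not calibration data from the embodiment):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Pose of the first photographing device: world -> first-camera frame
# (output of aerial triangulation / SFM / SLAM for one exposure).
R1 = np.eye(3)
t1 = np.array([10.0, 5.0, -120.0])
T_world_to_cam1 = to_homogeneous(R1, t1)

# Pre-calibrated relative pose: first-camera frame -> second-camera frame.
R_rel = np.eye(3)
t_rel = np.array([0.10, 0.0, 0.0])       # e.g. a 10 cm baseline between the lenses
T_cam1_to_cam2 = to_homogeneous(R_rel, t_rel)

# Pose of the second photographing device at the synchronised exposure.
T_world_to_cam2 = T_cam1_to_cam2 @ T_world_to_cam1
R2 = T_world_to_cam2[:3, :3]
t2 = T_world_to_cam2[:3, 3]
cam2_position_in_world = -R2.T @ t2
print(cam2_position_in_world)
```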
It should be noted that, in the prior art, the positions and postures calculated with a SLAM algorithm, an aerial triangulation algorithm, or an SFM algorithm are relative positions and relative postures within the photographed scene.
So that the positions and postures obtained by the above calculation correspond to the world coordinate system, which gives the positions and postures associated with the images more practical reference value, this embodiment may optionally also convert the calculated relative positions and postures into positions and postures in the world coordinate system:
In one possible implementation, the ground station may use a GPS measurement device carried by the aircraft to acquire GPS information of the aircraft; specifically, the GPS information may be provided by a real-time kinematic (RTK) system, and the calculated relative positions and postures are converted into positions and postures in the world coordinate system.
In another possible implementation, the ground station converts the calculated positions and postures into positions and postures in the world coordinate system based on the GPS information of preset ground control points. Specifically, in this implementation the relative positions of the control points in the first image captured by the first photographing device and in the second image captured by the second photographing device may be found manually, and then, based on these relative positions and the GPS information of the control points, the calculated relative positions and postures are converted into positions and postures in the world coordinate system. Alternatively, image recognition may be used: based on the GPS information of the control points, the images that contain the control points, and the regions of those images in which the control points are likely to appear, are first located among the first images captured by the first photographing device and the second images captured by the second photographing device; the control points are then recognized within those regions by a preset machine-learning model and an optimization algorithm, which yields the relative positions of the control points in the first and second images; finally, based on these relative positions and the GPS information, the relative positions and postures of the first photographing device and the second photographing device at the time of capture are converted into positions and postures in the world coordinate system. Compared with the manual approach, the image-recognition approach improves the efficiency of generating the output image.
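One common way to perform such a conversion, shown here only as an illustrative sketch, is to estimate a least-squares similarity transform (scale, rotation, translation) between points that are known both in the relative reconstruction frame and in the world frame, for example control points or RTK camera positions. The coordinate values below are invented for the example.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    with dst ~ s * R @ src + t (Umeyama-style alignment)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Control-point (or RTK) positions in the world frame and the corresponding
# positions reconstructed in the relative SFM/SLAM frame (illustrative values).
world = np.array([[440100.0, 3370200.0, 52.0],
                  [440150.0, 3370200.0, 53.1],
                  [440150.0, 3370260.0, 51.7],
                  [440100.0, 3370260.0, 52.4]])
relative = np.array([[0.0, 0.0, 0.0],
                     [5.0, 0.0, 0.11],
                     [5.0, 6.0, -0.03],
                     [0.0, 6.0, 0.04]])

s, R, t = similarity_transform(relative, world)

# Any camera position estimated in the relative frame can now be georeferenced.
cam_rel = np.array([2.5, 3.0, 12.0])
cam_world = s * R @ cam_rel + t
print(cam_world)
```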
Optionally, in this embodiment, after the relative positions of the control points in the first image and/or the second image are determined, those relative positions may also be displayed on the first image and/or the second image to improve the user experience.
In yet another possible implementation, while sending the images captured by the first photographing device and the second photographing device to the ground station, the aircraft also sends the GPS information of the aircraft at the time each image was taken. The ground station converts the calculated relative positions and postures into positions and postures in world coordinates according to the GPS information associated with the images.
Optionally, when the first photographing device and the second photographing device photograph synchronously, any one of the above ways may first be used to convert the relative position and posture of the first photographing device when capturing the first image into a position and posture in the world coordinate system, and then, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the relative position and posture of the second photographing device when capturing the second image is converted into a position and posture in the world coordinate system.
Step 103: Generate a terrain surface based on the first image and the position and posture of the first photographing device when capturing the first image.
For example, in this embodiment the ground station performs dense matching according to the positions and postures of the first photographing device when capturing the first images, generating a corresponding dense or semi-dense point cloud, and then fits a terrain surface from the point cloud produced by the dense matching. When fitting the terrain surface from the densely matched point cloud, the point cloud may first be divided into ground points and non-ground points, the ground points extracted from the cloud, and the terrain surface fitted from those ground points. Of course, this is only an example and not the only limitation of the invention; for instance, in an actual scenario a digital surface model (DSM) stored in advance on the ground station may be used as the terrain surface, and the first images captured by the first photographing device and the second images captured by the second photographing device are then projected onto the DSM.
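The following sketch illustrates only the ground-point extraction and surface-fitting steps; it assumes a point cloud has already been produced by dense matching and uses a deliberately simple lowest-point-per-cell ground filter rather than a production classifier.

```python
import numpy as np
from scipy.interpolate import griddata

def ground_points(points, cell=5.0, tol=0.5):
    """Very simple ground filter: keep points within `tol` metres of the
    lowest point of their grid cell (a stand-in for a real classifier)."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for k, z in zip(map(tuple, keys), points[:, 2]):
        lowest[k] = min(lowest.get(k, np.inf), z)
    mask = np.array([z <= lowest[tuple(k)] + tol
                     for k, z in zip(keys, points[:, 2])])
    return points[mask]

def terrain_surface(points, resolution=2.0):
    """Fit a gridded terrain surface (DEM-like height field) to the ground points."""
    gnd = ground_points(points)
    xs = np.arange(points[:, 0].min(), points[:, 0].max(), resolution)
    ys = np.arange(points[:, 1].min(), points[:, 1].max(), resolution)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(gnd[:, :2], gnd[:, 2], (gx, gy), method='linear')
    return gx, gy, gz

# Synthetic cloud: gently sloping ground plus a few "building/tree" points.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(2000, 2))
z = 0.02 * xy[:, 0] + rng.normal(0, 0.1, 2000)
z[:100] += rng.uniform(5, 15, 100)            # non-ground points
cloud = np.column_stack([xy, z])

gx, gy, gz = terrain_surface(cloud)
print(gz.shape, np.nanmean(gz))
```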
Optionally, when generating the terrain surface, preset ground control points may be used as constraints to calculate a more accurate terrain surface.
Step 104: Based on the position and posture of the second photographing device when capturing the second image, project and stitch the second image on the terrain surface to obtain the output image.
In this embodiment, the first image and the second image may be projected onto the terrain surface based on the relative positions and postures of the first photographing device when capturing the first image and of the second photographing device when capturing the second image, respectively, or based on their positions and postures in the world coordinate system when capturing those images.
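A minimal sketch of projecting one image onto the terrain surface by backward projection: every terrain grid cell is transformed into the camera frame with the computed pose and sampled from the image through a pinhole model. The intrinsics, pose, and grid below are invented for the example.

```python
import numpy as np

def project_image_to_terrain(image, K, R, t, gx, gy, gz):
    """Backward-project each terrain cell into the camera and sample the image.
    image: HxWx3 array, K: 3x3 intrinsics, (R, t): world -> camera pose,
    (gx, gy, gz): terrain grid in world coordinates (same shapes)."""
    h, w = image.shape[:2]
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)       # terrain points
    cam = pts @ R.T + t                                        # world -> camera
    valid = cam[:, 2] > 0                                      # in front of camera
    uvw = cam @ K.T
    zc = np.where(valid, uvw[:, 2], 1.0)                       # avoid bad depths
    u = np.round(uvw[:, 0] / zc).astype(int)
    v = np.round(uvw[:, 1] / zc).astype(int)
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((pts.shape[0], 3), dtype=image.dtype)
    out[valid] = image[v[valid], u[valid]]
    return out.reshape(gx.shape + (3,)), valid.reshape(gx.shape)

# Toy example: nadir camera 100 m above a flat 100 m x 100 m terrain patch.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]])
t = np.array([-50.0, 50.0, 100.0])                             # camera above (50, 50)
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
gz = np.zeros_like(gx)
image = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)

ortho, mask = project_image_to_terrain(image, K, R, t, gx, gy, gz)
print(ortho.shape, mask.mean())
```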
Optionally, in this embodiment the method for stitching the projections of the first image and the second image may specifically be one of the following: direct overwriting, panoramic image stitching, selecting for each region of the final image the image whose center is closest to that region, or a cost-function-based stitching method. Optionally, the projection of the second image may also be stitched along the seam lines used when stitching the projection of the first image. This embodiment takes the cost-function-based stitching method as an example; the stitching proceeds as follows:
In one possible stitching approach, the ground station may first perform dense matching based on the position and posture of the first photographing device when capturing the first image, generating a dense or semi-dense point cloud; it then constructs a cost function based on the projection of the second image on the terrain surface and the point cloud generated above, and stitches the projections of the second image based on that cost function, so that the color difference on the two sides of the seam line is minimized. The stitching of the projection of the first image is similar to that of the second image and is not repeated here.
In another possible stitching approach, the ground station may first construct a cost function based on the projection of the second image on the surface and on the point cloud obtained from the second image, and then stitch the projections of the second image based on that cost function.
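As one concrete, simplified instance of a cost-function-based seam, the sketch below builds a per-pixel cost from the colour difference between two overlapping projections and finds a minimum-cost seam by dynamic programming, so that the colour difference across the seam line stays small. The full cost function described above also incorporates the point cloud, which is omitted here; the projections are random stand-ins.

```python
import numpy as np

def color_difference_cost(proj_a, proj_b):
    """Per-pixel cost over the overlap of two projections: the colour
    difference, so a seam through low-cost pixels leaves little visible edge."""
    return np.linalg.norm(proj_a.astype(float) - proj_b.astype(float), axis=-1)

def best_vertical_seam(cost):
    """Dynamic-programming seam (one column index per row) minimising the
    accumulated cost, in the style of seam carving."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

def stitch_along_seam(proj_a, proj_b, seam):
    """Take proj_a left of the seam and proj_b right of it."""
    out = proj_b.copy()
    for y, x in enumerate(seam):
        out[y, :x] = proj_a[y, :x]
    return out

# Two overlapping projections of the same terrain area (random stand-ins here).
rng = np.random.default_rng(1)
proj_a = rng.integers(0, 255, (200, 300, 3), dtype=np.uint8)
proj_b = rng.integers(0, 255, (200, 300, 3), dtype=np.uint8)

cost = color_difference_cost(proj_a, proj_b)
seam = best_vertical_seam(cost)
mosaic = stitch_along_seam(proj_a, proj_b, seam)
print(mosaic.shape, seam[:5])
```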
Of course, the above stitching methods are only examples and not the only limitation of the invention; in practice, any other stitching method may be used in an actual scenario.
Optionally, in this embodiment the ground station may process the received images in either of the following two working modes:
In one possible processing mode, the ground station processes the received images as they arrive. That is, the ground station processes the received images while the aircraft is cruising and photographing, obtaining a semi-dense or dense point cloud of the images. In this mode, every time the ground station receives an image it updates the semi-dense, dense, or sparse point cloud obtained by the processing. It should also be noted that "processing as received" does not refer only to its literal meaning but depends on the processing speed of the ground station: if the ground station is fast enough, it processes each image immediately upon receipt; if its processing speed cannot keep up with immediate processing, it processes the received images in sequence. Specifically, the ground station may process them in the order in which they were received, in the order in which they were stored, or in another custom processing order, which is not specifically limited in this embodiment.
In another possible processing mode, while the aircraft is cruising and photographing the ground station only receives the images captured by the first photographing device and the second photographing device, and processes the received images together once the aircraft finishes the cruise.
Optionally, when stitching the projections of the images, a global color and/or brightness adjustment may first be applied based on the calculated point cloud, to noticeably improve image quality. Then, based on the adjusted projections, a cost function is constructed with the distance from a projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched based on that cost function so that the color difference on the two sides of the seam line is minimized, which yields an output image with good overall consistency.
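A minimal sketch of one possible global colour/brightness adjustment: a per-channel gain and bias fitted by least squares so that one projection matches a reference over their overlap. This illustrates the idea rather than the adjustment actually claimed, and the distance-to-camera term of the cost function is not shown.

```python
import numpy as np

def global_gain_bias(proj, ref, overlap_mask):
    """Per-channel gain/bias that best maps `proj` onto `ref` over the overlap
    (least squares), used as a simple global colour/brightness adjustment."""
    adjusted = proj.astype(float).copy()
    for c in range(proj.shape[-1]):
        x = proj[..., c][overlap_mask].astype(float)
        y = ref[..., c][overlap_mask].astype(float)
        gain, bias = np.polyfit(x, y, deg=1)          # y ~ gain * x + bias
        adjusted[..., c] = gain * proj[..., c] + bias
    return np.clip(adjusted, 0, 255).astype(np.uint8)

# Example: the second projection is darker overall; align it to the first.
rng = np.random.default_rng(2)
ref = rng.integers(60, 200, (100, 100, 3), dtype=np.uint8)
proj = (0.7 * ref + 10).astype(np.uint8)               # simulated exposure difference
overlap = np.ones((100, 100), dtype=bool)

balanced = global_gain_bias(proj, ref, overlap)
print(np.abs(balanced.astype(int) - ref.astype(int)).mean())   # small residual
```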
Further, to avoid the influence of non-ground points in the point cloud on the stitching (non-ground points cause stitching misalignment), this embodiment may also exclude the non-ground points of the point cloud when constructing the cost function, so that the seam line automatically avoids non-ground regions, yielding an output image with a better visual effect.
Specifically, FIG. 3a and FIG. 3b are schematic output images of the same scene provided by the present invention, where FIG. 3a is the output image obtained by using an estimated elevation plane as the projection surface and FIG. 3b is the output image obtained by using a terrain surface fitted from the point cloud as the projection surface; both are stitched with the cost-function method. As shown in FIG. 3a, because an average elevation plane is used as the projection surface and such a plane cannot accurately fit the terrain surface, the output image in FIG. 3a shows severe stitching misalignment. In FIG. 3b, because the terrain surface fitted from the point cloud is used as the projection surface, the terrain can be fitted more accurately, and the cost-function method minimizes the color difference on the two sides of the seam line; the resulting output image therefore shows no obvious stitching misalignment and has good overall consistency. Thus, by fitting the terrain surface from the point cloud and performing the stitching with a cost-function method, the embodiments of the present invention can solve the problem of stitching misalignment in the output image.
Optionally, to give the whole output image a better visual effect, in this embodiment, after the first image and the second image are projected onto the terrain surface, the color and brightness of the projections of the first image and/or the second image on the terrain surface may also be adjusted based on a preset strategy, so that a better stitching result can be obtained in the subsequent stitching.
For example, FIG. 4a and FIG. 4b are output images of the same scene provided by an embodiment of the present invention. In FIG. 4a the projections on the terrain surface have not undergone color and brightness processing, so the output image lacks overall consistency in color and brightness and its visual effect is poor. In FIG. 4b the projections on the terrain surface were processed for overall brightness and color consistency before stitching, so the resulting output image has good overall color and brightness consistency and a better visual effect. Therefore, by applying overall color and brightness processing to the projections on the terrain surface before stitching, the embodiments of the present invention can effectively improve the visual effect of the output image.
Optionally, the output image in this embodiment may specifically be an orthophoto, for example an orthophoto map or another image with real geographic coordinate information obtained by orthographic projection and stitching.
In this embodiment, a first image captured by a first photographing device mounted on the aircraft, whose FOV is greater than or equal to a preset threshold, and a second image captured by a second photographing device mounted on the aircraft are acquired; based on a preset algorithm, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image are calculated; a terrain surface is generated based on the first image and the position and posture of the first photographing device when capturing the first image; and, based on the position and posture of the second photographing device when capturing the second image, the second image is projected and stitched on the terrain surface to obtain the output image. Because the FOV of the first photographing device in this embodiment is greater than or equal to the preset threshold, and the larger the FOV, the higher the accuracy of the elevation surface (that is, the terrain surface) fitted from the images captured by the first photographing device, projecting the images captured by the other photographing devices on the aircraft onto that elevation surface yields correspondingly more accurate orthophotos, which improves the accuracy of the generated orthophotos.
FIG. 5 is a flowchart of an output image generation method provided by an embodiment of the present invention. As shown in FIG. 5, on the basis of the embodiment of FIG. 1, the method includes:
Step 501: Acquire a first visible-light image captured by a first photographing device mounted on the aircraft and a second visible-light image captured by a second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
In this embodiment the first photographing device may specifically be a wide-angle camera and the second photographing device a telephoto camera.
Step 502: Calculate the position and posture of the first photographing device when capturing the first visible-light image based on a preset image processing algorithm.
Step 503: Calculate the position and posture of the second photographing device when capturing the second visible-light image based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
Step 504: Generate a terrain surface based on the first visible-light image and the position and posture of the first photographing device when capturing the first visible-light image.
Step 505: Based on the position and posture of the second photographing device when capturing the second visible-light image, project and stitch the second visible-light image on the terrain surface to obtain the visible-light output image corresponding to the second photographing device.
Optionally, in this embodiment, the methods for stitching the projections of the second visible-light image on the terrain surface include the following:
In one possible implementation, the projections of the second visible-light image on the terrain surface may be stitched along the seam lines used when stitching the projections of the first visible-light image on the terrain surface, obtaining the visible-light output image corresponding to the second photographing device.
In another possible implementation, dense matching may first be performed based on the position and posture of the first photographing device when capturing the first visible-light image, generating a corresponding dense or semi-dense point cloud; a cost function is then constructed based on the projection of the second visible-light image on the terrain surface and the point cloud generated by the dense matching; and the projections of the second visible-light image on the terrain surface are stitched based on that cost function, obtaining the visible-light output image corresponding to the second photographing device.
In yet another possible implementation, dense matching may first be performed based on the position and posture of the second photographing device when capturing the second visible-light image, generating a corresponding dense or semi-dense point cloud; a cost function is then constructed based on the projection of the second visible-light image on the terrain surface and the point cloud generated by the dense matching, and the projections of the second visible-light image on the terrain surface are stitched based on that cost function, obtaining the visible-light output image corresponding to the second photographing device.
Optionally, in this embodiment the first visible-light image may also be projected onto the terrain surface based on the position and posture of the first photographing device when capturing it, and the projection of the first visible-light image on the terrain surface may be rectified based on the projection of the second visible-light image on the terrain surface; a visible-light output image corresponding to the first photographing device is then obtained from the rectified projections, and the projections of the second image on the terrain surface are stitched along the seam lines of that visible-light output image.
Optionally, in this embodiment an orthographic visible-light output image may also be obtained by applying orthographic processing to the projections of the second visible-light image on the terrain surface.
In this embodiment, because the first photographing device is specifically a wide-angle camera and the second photographing device a telephoto camera, the FOV of the first photographing device is larger and that of the second photographing device smaller. The elevation surface (that is, the terrain surface) obtained from the positions and postures of the first photographing device when capturing the first visible-light images is therefore more precise and accurate than an elevation surface obtained from the positions and postures of the second photographing device when capturing the second visible-light images, so projecting the second visible-light images captured by the second photographing device onto that elevation surface yields orthophotos of higher accuracy and precision, improving the accuracy of the orthophotos obtained from the second photographing device.
The execution and beneficial effects of the method provided by this embodiment are similar to those of the embodiment of FIG. 1 and are not repeated here.
FIG. 6 is a flowchart of an output image generation method provided by an embodiment of the present invention. As shown in FIG. 6, on the basis of the embodiment of FIG. 1, the method includes:
Step 601: Acquire a visible-light image captured by the first photographing device mounted on the aircraft and a near-infrared image captured by the second photographing device, the first photographing device and the second photographing device photographing synchronously.
In this embodiment the first photographing device may specifically be a visible-light camera whose FOV is greater than or equal to the preset threshold (for example, a wide-angle camera with an FOV greater than or equal to the preset threshold), and the second photographing device may specifically be a near-infrared camera.
Step 602: Calculate the position and posture of the first photographing device when capturing the visible-light image based on a preset image processing algorithm.
Step 603: Calculate the position and posture of the second photographing device when capturing the near-infrared image based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
Step 604: Generate a terrain surface based on the visible-light image and the position and posture of the first photographing device when capturing the visible-light image.
Step 605: Based on the position and posture of the second photographing device when capturing the near-infrared image, project and stitch the near-infrared image on the terrain surface to obtain the near-infrared output image corresponding to the second photographing device.
Optionally, in this embodiment, the methods for stitching the projections of the near-infrared image on the terrain surface include the following:
In one possible implementation, the ground station may stitch the projections of the visible-light image on the terrain surface to obtain an orthographic visible-light output image, and stitch the projections of the near-infrared image on the terrain surface along the seam lines of the visible-light projections, obtaining the near-infrared output image.
Optionally, after the visible-light output image corresponding to the first photographing device and the near-infrared output image corresponding to the second photographing device are obtained, this embodiment may also display the visible-light output image and/or the near-infrared output image on the ground station to improve the user experience.
In another possible implementation, dense matching may first be performed based on the position and posture of the first photographing device when capturing the visible-light image, generating a corresponding dense or semi-dense point cloud; a cost function is then constructed based on the projection of the near-infrared image captured by the second photographing device on the terrain surface and the point cloud generated by the dense matching; and the projections of the near-infrared image on the terrain surface are stitched based on that cost function, obtaining the near-infrared output image corresponding to the second photographing device.
In yet another possible implementation, the ground station performs dense matching based on the position and posture of the second photographing device when capturing the near-infrared image, generating a corresponding dense or semi-dense point cloud, and constructs a cost function based on the projection of the near-infrared image on the terrain surface and the point cloud generated by the dense matching. The principle for constructing the cost function is that the seam line should, as far as possible, pass through places where the color difference is small and avoid man-made objects such as buildings and bridges. The projections of the near-infrared image on the terrain surface are then stitched based on the cost function, obtaining an orthographic near-infrared output image.
Optionally, this embodiment may also calculate a normalized difference vegetation index (NDVI) and/or an enhanced vegetation index (EVI) based on the visible-light output image and the near-infrared output image obtained above, draw the corresponding index map based on the calculated NDVI and/or EVI, and display the index map.
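The index computation itself is standard. Assuming the red and blue bands of the visible-light output image and the near-infrared output image have been resampled onto the same grid as reflectance values in [0, 1], a sketch looks like this (the EVI coefficients are the commonly used ones and are not specified by the embodiment):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index from co-registered NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue, eps=1e-6):
    """Enhanced vegetation index with the commonly used coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0 + eps)

# Reflectance rasters in [0, 1]: the red/blue bands come from the visible-light
# output image and the NIR band from the near-infrared output image, resampled
# onto the same terrain grid (random values stand in for real data here).
rng = np.random.default_rng(3)
red = rng.uniform(0.02, 0.2, (512, 512))
blue = rng.uniform(0.02, 0.15, (512, 512))
nir = rng.uniform(0.2, 0.6, (512, 512))

ndvi_map = ndvi(nir, red)
evi_map = evi(nir, red, blue)
print(float(ndvi_map.mean()), float(evi_map.mean()))
```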
Further, after the index map is obtained, the growth status of the vegetation may be analyzed based on the index map and the analysis result output. This serves the purpose of providing vegetation analysis data and facilitates vegetation analysis.
In addition, because trees and buildings both have relatively high elevations in some scenes, trees are easily classified as buildings, or buildings as trees, when the point cloud of the near-infrared image is classified. In view of this problem, this embodiment may also distinguish vegetation from buildings in the near-infrared image according to the calculated NDVI and/or EVI, enhancing the reliability of the classification, as in the sketch below.
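A toy sketch of using NDVI as the tie-breaker when labelling elevated points; the thresholds are invented for the example.

```python
import numpy as np

def classify_high_points(height_above_ground, ndvi_value,
                         height_thresh=2.0, ndvi_thresh=0.4):
    """Label elevated points as vegetation or building using NDVI as a tie-breaker."""
    labels = np.full(height_above_ground.shape, 'ground', dtype=object)
    elevated = height_above_ground > height_thresh
    labels[elevated & (ndvi_value >= ndvi_thresh)] = 'vegetation'
    labels[elevated & (ndvi_value < ndvi_thresh)] = 'building'
    return labels

# Example: one ground point and three elevated points.
h = np.array([0.2, 6.0, 8.5, 12.0])
v = np.array([0.1, 0.7, 0.65, 0.05])
print(classify_high_points(h, v))   # ['ground' 'vegetation' 'vegetation' 'building']
```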
For example, FIG. 7a shows unstitched near-infrared images captured by the near-infrared camera; FIG. 7b shows the near-infrared output image obtained after orthographic stitching of the near-infrared images in FIG. 7a; FIG. 7c shows the visible-light output image corresponding to the visible-light camera that photographed synchronously with the near-infrared camera of FIG. 7a; FIG. 7d shows the NDVI index map calculated from the near-infrared image and the red band of the visible-light output image; and FIG. 7e shows the result of pseudo-color rendering in green of the orthographically stitched near-infrared image. As shown in FIG. 7a to FIG. 7e, a single near-infrared image in FIG. 7a covers only a small area, and multiple unstitched images do not express the whole scene intuitively, whereas the orthographically stitched near-infrared output image in FIG. 7b not only expresses the whole scene better but is also measurable. FIG. 7e, obtained from FIG. 7c and FIG. 7d, can reflect the chlorophyll content of the plants through the brightness and saturation of the green, making the analysis result more intuitive.
In addition, those skilled in the art should understand that, although this embodiment uses the NDVI and EVI indices as the indicators for analyzing vegetation growth, an actual scenario is not limited to NDVI and EVI; the NDVI and/or EVI may be replaced by other indicators that can be used to analyze the growth state of vegetation, which is not specifically limited in this embodiment.
In this embodiment, visible-light images and near-infrared images are acquired simultaneously during image collection; the NDVI and/or EVI index is calculated from the different responses of plants to the two spectra, and the NDVI and/or EVI index is used as an important basis for classifying vegetation, which improves the reliability of the point cloud classification.
Optionally, in this embodiment the aircraft may also carry a wide-angle camera, a telephoto camera, and a near-infrared camera at the same time; the method of processing the images captured by the telephoto camera and the near-infrared camera based on the images captured by the wide-angle camera is similar to that of the preceding embodiments and is not repeated here.
The execution and beneficial effects of this embodiment are similar to those of the embodiment of FIG. 1 and are not repeated here.
FIG. 8 is a flowchart of an output image generation method provided by an embodiment of the present invention. As shown in FIG. 8, on the basis of the embodiment of FIG. 1, the method includes:
Step 801: Acquire a visible-light image captured by the first photographing device mounted on the aircraft and an infrared image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
In this embodiment the first photographing device may specifically be a visible-light camera whose FOV is greater than or equal to the preset threshold (for example, a wide-angle camera with an FOV greater than or equal to the preset threshold), and the second photographing device may specifically be an infrared camera.
Step 802: Calculate the position and posture of the first photographing device when capturing the visible-light image based on a preset image processing algorithm.
Step 803: Calculate the position and posture of the second photographing device when capturing the infrared image based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
Step 804: Generate a terrain surface based on the visible-light image and the position and posture of the first photographing device when capturing the visible-light image.
Step 805: Based on the position and posture of the second photographing device when capturing the infrared image, project and stitch the infrared image on the terrain surface to obtain the infrared output image corresponding to the second photographing device.
Optionally, in this embodiment, the methods for stitching the projections of the infrared image on the terrain surface include the following:
In one possible implementation, the ground station stitches the projections of the visible-light image on the terrain surface to obtain a visible-light output image, and stitches the projections of the infrared image along the seam lines of the projections of the visible-light image on the terrain surface, obtaining the infrared output image.
In another possible implementation, dense matching may first be performed based on the position and posture of the first photographing device when capturing the visible-light image, generating a corresponding dense or semi-dense point cloud; a cost function is then constructed based on the projection of the infrared image captured by the second photographing device on the terrain surface and the point cloud generated by the dense matching; and the projections of the infrared image on the terrain surface are stitched based on that cost function, obtaining the infrared output image corresponding to the second photographing device.
In yet another possible implementation, the ground station performs dense matching based on the position and posture of the second photographing device when capturing the infrared image, generating a corresponding dense or semi-dense point cloud, constructs a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by the dense matching, and stitches the projections of the infrared image on the terrain surface based on that cost function, obtaining the infrared output image.
Optionally, after the above visible-light output image and infrared output image are obtained, this embodiment may also display the visible-light output image and/or the infrared output image on the ground station to improve the user experience.
Optionally, because the aircraft in this embodiment carries an infrared camera, this embodiment may also, based on the characteristics of infrared images, identify the locations of heat-emitting objects (such as photovoltaic panels and power lines) from the infrared images captured by the infrared camera or from the infrared output image corresponding to the infrared camera. Power lines in particular are difficult to recognize in ordinary aerial visible-light images because of their small diameter; however, because power lines give off heat, this embodiment can easily identify them in aerial infrared images, achieving the purpose of power line identification.
Further, after a power line is identified in the infrared images captured by the infrared camera or in the infrared output image corresponding to the infrared camera, this embodiment may model the identified power line using the position and posture of the second photographing device when capturing the infrared images and a preset mathematical power-line model, form a power-line layer, and display that power-line layer superimposed on the visible-light output image obtained above.
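The embodiment does not specify the mathematical power-line model. As an illustration, the sketch below fits a catenary, a common model for a conductor hanging between towers, to sampled heights along one detected span using scipy; the fitted curve could then be densely sampled to draw the power-line layer. The sample values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(s, c, s0, z0):
    """Height of a hanging conductor at horizontal distance s along the span."""
    return z0 + c * (np.cosh((s - s0) / c) - 1.0)

# Synthetic samples of one detected power-line span (horizontal distance along
# the span vs. height), as might be triangulated from the infrared images.
s_samples = np.linspace(0.0, 120.0, 25)
z_true = catenary(s_samples, c=300.0, s0=60.0, z0=28.0)
z_samples = z_true + np.random.default_rng(4).normal(0.0, 0.05, s_samples.size)

# Fit the catenary parameters; the initial guess is rough but reasonable.
(c_fit, s0_fit, z0_fit), _ = curve_fit(catenary, s_samples, z_samples,
                                       p0=(200.0, 50.0, 25.0))
print(c_fit, s0_fit, z0_fit)

# Densely sample the fitted curve to draw the power-line layer that is
# overlaid on the visible-light output image.
span = np.linspace(0.0, 120.0, 200)
layer_heights = catenary(span, c_fit, s0_fit, z0_fit)
```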
Based on the fact that a power line's temperature differs from that of its surroundings, this embodiment carries an infrared camera on the aircraft, identifies power lines from the infrared images captured by the infrared camera or from the infrared output image corresponding to the infrared camera, models the identified power lines to generate a power-line layer, and superimposes the power-line layer on the visible-light output image, so that power lines are displayed clearly on the orthophoto and their specific information can be obtained by measurement. In addition, the information content of the orthophoto can be increased by displaying the specific parameters of the power lines on the orthophoto.
Optionally, in this embodiment the aircraft may also carry a wide-angle camera, a telephoto camera, and an infrared camera at the same time; the method of processing the images captured by the telephoto camera and the infrared camera based on the images captured by the wide-angle camera is similar to that of the preceding embodiments and is not repeated here.
The execution and beneficial effects of this embodiment are similar to those of the embodiment of FIG. 1 and are not repeated here.
FIG. 9 is a flowchart of an output image generation method provided by an embodiment of the present invention. As shown in FIG. 9, on the basis of the embodiment of FIG. 1, the method includes:
Step 901: Acquire a first image captured by the first photographing device mounted on the aircraft and a second image captured by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold and the shooting intervals of the first photographing device and the second photographing device are associated with the flying height of the aircraft relative to the ground.
Optionally, as shown in FIG. 10a, when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device photograph at the same horizontal shooting interval.
Optionally, when the height of the aircraft relative to the ground surface changes, the shooting intervals of the first photographing device and the second photographing device change. Specifically, as shown in FIG. 10b, when the aircraft flies at a uniform absolute height, the first photographing device and the second photographing device photograph at time-varying horizontal shooting intervals, where the shooting interval is associated with the pre-configured image overlap ratio and with the height of the aircraft relative to the ground surface.
Optionally, to guarantee the overlap ratio, the shooting interval may be increased accordingly when the height relative to the ground surface increases and decreased accordingly when that height decreases.
Step 902: Based on a preset algorithm, calculate the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image.
Step 903: Generate a terrain surface based on the first image and the position and posture of the first photographing device when capturing the first image.
Step 904: Based on the position and posture of the second photographing device when capturing the second image, project and stitch the second image on the terrain surface to obtain the output image.
The execution and beneficial effects of the method provided by this embodiment are similar to those of the embodiment of FIG. 1 and are not repeated here.
本发明实施例提供一种地面站,该地面站可以是上述实施例所述的地面站。图11为本发明实施例提供的地面站的结构示意图,如图11所示,地面站10包括:通信接口11、一个或多个处理器12;一个或多个处理器单独或协同工作,通信接口11和处理器12连接;所述通信接口11 用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;所述处理器12用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;所述处理器12用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;所述处理器12还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The embodiment of the invention provides a ground station, which may be the ground station described in the above embodiment. 11 is a schematic structural diagram of a ground station according to an embodiment of the present invention. As shown in FIG. 11, the ground station 10 includes: a communication interface 11, one or more processors 12; and one or more processors work independently or in cooperation. The interface 11 is connected to the processor 12; the communication interface 11 And configured to: obtain a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or equal to a preset threshold. The processor 12 is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured and a position of the second photographing device when the second image is photographed And the processor 12 is configured to: generate a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured; the processor 12 is further configured to: And based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
可选的,所述第一拍摄设备的FOV大于所述第二拍摄设备的FOV。Optionally, the FOV of the first photographing device is greater than the FOV of the second photographing device.
可选的,所述第二拍摄设备的FOV小于所述预设阈值。Optionally, the FOV of the second photographing device is less than the preset threshold.
可选的,所述处理器12用于:基于预先设定的像控点的GPS信息,将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,以及所述第二拍摄设备在拍摄所述第二影像时的位置和姿,转换为世界坐标系下的位置和姿态。Optionally, the processor 12 is configured to: determine, according to GPS information of a preset image control point, a position and a posture of the first photographing device when the first image is captured, and the second photographing The position and posture of the device when the second image is captured are converted into a position and a posture in the world coordinate system.
可选的,所述处理器12用于:基于预先设定的像控点的GPS信息,确定所述像控点在所述第一拍摄设备拍摄获得的第一影像中的相对位置;基于所述像控点在所述第一影像中的相对位置,以及所述像控点的GPS信息,将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态转换为世界坐标系下的位置和姿态;基于预先标定的第一拍摄设备和所述第二拍摄设备之间的相对位置关系,将所述第二拍摄设备在拍摄所述第二影像时的位置和姿态转换为世界坐标系下的位置和姿态。Optionally, the processor 12 is configured to: determine, according to GPS information of a preset image control point, a relative position of the image control point in the first image captured by the first photographing device; Determining a relative position of the image control point in the first image, and GPS information of the image control point, converting the position and posture of the first photographing device when the first image is captured into a world coordinate system Position and posture; converting the position and posture of the second photographing device when photographing the second image into world coordinates based on a relative positional relationship between the first photographing device and the second photographing device that are pre-calibrated Position and posture under the system.
可选的,所述地面站还包括显示组件13,显示组件13与处理器12通信连接,显示组件13用于:显示所述像控点在所述第一影像上的所述相对位置。Optionally, the ground station further includes a display component 13 communicatively coupled to the processor 12, the display component 13 configured to: display the relative position of the image control point on the first image.
可选的,所述处理器12用于:以预先设定的像控点作为约束条件,采用运动恢复结构SFM算法计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态;基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述第二影像时的位置和姿态。 Optionally, the processor 12 is configured to: calculate, by using a motion recovery structure SFM algorithm, a position and a posture of the first photographing device when the first image is captured, by using a preset image control point as a constraint condition; And calculating a position and a posture of the second photographing device when the second image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
可选的,所述处理器12用于:基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于稠密匹配生成的点云拟合形成地形表面。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the first photographing device when the first image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; The resulting point cloud fits to form a terrain surface.
Optionally, the processor 12 is configured to: extract ground points from the point cloud generated by dense matching; and fit the extracted ground points to form the terrain surface.
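A minimal sketch of this step, assuming the dense point cloud is an N×3 array of (x, y, z) points: ground points are approximated by the low-percentile heights in each horizontal grid cell, and the terrain surface is the resulting gridded height map. Real pipelines typically use more robust ground filters; the cell size and percentile are illustrative assumptions.

```python
import numpy as np

def fit_terrain_surface(points, cell=5.0, ground_pct=10):
    """Rasterize a point cloud into a coarse terrain height grid.

    points: (N, 3) array of x, y, z in a metric world frame.
    cell: grid cell size in metres.
    ground_pct: per-cell height percentile treated as 'ground'.
    Returns (grid_x, grid_y, grid_z) where grid_z[i, j] is the terrain height.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    grid_z = np.full((nx, ny), np.nan)
    for i in range(nx):
        for j in range(ny):
            mask = (ix == i) & (iy == j)
            if mask.any():
                # A low-percentile height rejects vegetation/building returns.
                grid_z[i, j] = np.percentile(z[mask], ground_pct)
    grid_x = x.min() + (np.arange(nx) + 0.5) * cell
    grid_y = y.min() + (np.arange(ny) + 0.5) * cell
    return grid_x, grid_y, grid_z

# Synthetic cloud: gently sloping ground plus scattered "canopy" points.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 100, 5000),
                          rng.uniform(0, 100, 5000),
                          np.zeros(5000)])
ground[:, 2] = 0.02 * ground[:, 0] + rng.normal(0, 0.1, 5000)
canopy = ground[:1000].copy()
canopy[:, 2] += rng.uniform(2, 10, 1000)
gx, gy, gz = fit_terrain_surface(np.vstack([ground, canopy]))
```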
Optionally, the processor 12 is further configured to perform global color and/or brightness adjustment on the projection of the second image on the terrain surface.
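One common form of such a global adjustment, given only as a sketch since the disclosure does not fix the adjustment model, matches each projected image to a reference with a single per-channel gain and bias estimated from the overlap region:

```python
import numpy as np

def global_gain_bias(src, ref, overlap_mask):
    """Estimate a per-channel linear correction so that src matches ref in the overlap."""
    corrected = src.astype(np.float64).copy()
    for c in range(src.shape[2]):
        s = src[..., c][overlap_mask].astype(np.float64)
        r = ref[..., c][overlap_mask].astype(np.float64)
        gain = r.std() / (s.std() + 1e-9)      # match contrast
        bias = r.mean() - gain * s.mean()      # match brightness
        corrected[..., c] = gain * src[..., c] + bias
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Example: darken one tile and recover it from a shared overlap strip.
ref = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
src = (0.7 * ref + 20).clip(0, 255).astype(np.uint8)
mask = np.zeros((100, 100), dtype=bool)
mask[:, :30] = True
out = global_gain_bias(src, ref, mask)
```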
Optionally, the preset image processing algorithm includes any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
可选的,所述输出影像包括正射影像。Optionally, the output image includes an orthophoto.
可选的,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联。Optionally, the shooting interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft relative to the ground.
可选的,当所述飞行器相对于地表以固定的相对高度飞行时,所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。Optionally, when the aircraft is flying at a fixed relative height with respect to the ground surface, the first photographing device and the second photographing device respectively photograph at the same photographing interval in the horizontal direction.
可选的,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。Optionally, when the aircraft changes in height relative to the surface, the shooting interval of the first photographing device and the second photographing device changes.
Optionally, when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and with the height of the aircraft relative to the ground surface.
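To make this relationship concrete: for a nadir-pointing camera, the along-track ground footprint scales linearly with the height above ground, so the capture spacing that preserves a configured forward-overlap ratio also scales with that height. The sketch below assumes a simple pinhole model with illustrative sensor parameters, not values from this disclosure.

```python
def capture_spacing(height_agl_m, overlap=0.8, focal_mm=24.0, sensor_along_track_mm=15.6):
    """Distance between exposures (metres) that keeps the given forward overlap."""
    footprint = height_agl_m * sensor_along_track_mm / focal_mm  # ground coverage along track
    return (1.0 - overlap) * footprint

def capture_interval(height_agl_m, speed_mps, **kwargs):
    """Time between exposures (seconds) at a given ground speed."""
    return capture_spacing(height_agl_m, **kwargs) / speed_mps

# At 100 m above ground, 10 m/s ground speed and 80 % forward overlap:
print(capture_spacing(100.0))          # ~13 m between exposures
print(capture_interval(100.0, 10.0))   # ~1.3 s between exposures
```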
本实施例提供的地面站能够执行图1实施例的技术方案,其执行方式和有益效果类似,在这里不再赘述。The ground station provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
An embodiment of the present invention provides a ground station. On the basis of the embodiment of FIG. 11, the communication interface 11 is configured to acquire a first visible light image captured by the first photographing device mounted on the aircraft and a second visible light image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device shooting synchronously. The processor 12 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the first photographing device when capturing the first visible light image; and calculate, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when capturing the second visible light image.
可选的,处理器12用于:基于对所述第一可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 12 is configured to: perform a projection on the terrain surface of the second visible light image based on a splicing line used when splicing the projection of the first visible light image on the terrain surface Splicing, obtaining a visible light output image corresponding to the second photographing device.
可选的,所述处理器12用于:基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the first photographing device when the first visible light image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the second visible light image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the second visible light image on the terrain surface based on the cost function Obtaining a visible light output image corresponding to the second photographing device.
可选的,所述处理器12用于:基于所述第二拍摄设备在拍摄所述第二可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the second photographing device when the second visible light image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the second visible light image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the second visible light image on the terrain surface based on the cost function Obtaining a visible light output image corresponding to the second photographing device.
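One way to read the cost-function approach above: in the overlap between two projected images, build a per-pixel cost (for instance a colour difference, optionally weighted by depth discontinuities taken from the dense point cloud) and choose the seam that minimises the accumulated cost. The sketch below finds a vertical seam through a cost map with dynamic programming; the cost definition is an illustrative assumption, not the exact function of this disclosure.

```python
import numpy as np

def seam_cost(img_a, img_b):
    """Per-pixel cost in the overlap: squared colour difference between the two projections."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return (diff ** 2).sum(axis=2)

def min_cost_vertical_seam(cost):
    """Dynamic-programming seam: one column index per row, minimising accumulated cost."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        left = np.r_[np.inf, acc[y - 1, :-1]]
        right = np.r_[acc[y - 1, 1:], np.inf]
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

# Compose the mosaic: take img_a left of the seam, img_b from the seam onward.
img_a = np.random.randint(0, 256, (64, 48, 3), dtype=np.uint8)
img_b = np.random.randint(0, 256, (64, 48, 3), dtype=np.uint8)
seam = min_cost_vertical_seam(seam_cost(img_a, img_b))
mosaic = img_a.copy()
for y, x in enumerate(seam):
    mosaic[y, x:] = img_b[y, x:]
```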
可选的,所述处理器12还用于:基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态,将所述第一可见光影像投影到所述地形表面。Optionally, the processor 12 is further configured to: project the first visible light image onto the terrain surface based on a position and a posture of the first photographing device when the first visible light image is captured.
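A minimal sketch of projecting image pixels onto the terrain given the camera's world pose, under the simplifying assumption that the terrain is locally a horizontal plane at a known height (in practice the fitted terrain surface is used): each pixel is back-projected to a viewing ray and intersected with the plane. The intrinsics below are illustrative.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, t_wc, ground_z=0.0):
    """Intersect the viewing ray of pixel (u, v) with the plane z = ground_z.

    K: 3x3 intrinsics; R_wc: camera-to-world rotation; t_wc: camera centre in world coords.
    Returns the (x, y, z) ground point in world coordinates.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    ray_world = R_wc @ ray_cam                           # ray direction in world frame
    if abs(ray_world[2]) < 1e-12:
        raise ValueError("ray is parallel to the ground plane")
    s = (ground_z - t_wc[2]) / ray_world[2]              # ray parameter at the plane
    return t_wc + s * ray_world

# Nadir-looking camera 100 m above the origin, illustrative intrinsics.
K = np.array([[3000.0, 0.0, 2000.0],
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
R_wc = np.diag([1.0, -1.0, -1.0])          # camera z-axis pointing straight down
t_wc = np.array([0.0, 0.0, 100.0])
print(pixel_to_ground(2000.0, 1500.0, K, R_wc, t_wc))   # principal ray hits (0, 0, 0)
```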
可选的,所述处理器12还用于:对所述第二可见光影像在所述地形表面上的投影进行正射处理。Optionally, the processor 12 is further configured to orthographically process the projection of the second visible light image on the terrain surface.
可选的,所述第一拍摄设备为广角相机,所述第二拍摄设备为长焦相机。Optionally, the first photographing device is a wide-angle camera, and the second photographing device is a telephoto camera.
本实施例提供的地面站能够用于执行图5实施例的方法,其执行方式和有益效果类似,这里不再赘述。The ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 5, and the execution manner and the beneficial effects are similar, and details are not described herein again.
An embodiment of the present invention provides a ground station. On the basis of the embodiment of FIG. 11, the communication interface 11 is configured to acquire a visible light image captured by the first photographing device mounted on the aircraft and a near-infrared image captured by the second photographing device, the first photographing device and the second photographing device shooting synchronously. The processor 12 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the first photographing device when capturing the visible light image; and calculate, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when capturing the near-infrared image.
Optionally, the processor 12 is configured to: stitch the projection of the visible light image on the terrain surface to obtain a visible light output image; and stitch the projection of the near-infrared image on the terrain surface along the seam lines used when stitching the projection of the visible light image on the terrain surface, to obtain a near-infrared output image.
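One concrete way to realise this seam re-use, assuming the visible light mosaic has already produced a label map recording which source image supplies each output pixel: the same label map is applied to the co-registered near-infrared projections, so both mosaics are cut along identical seam lines. The array shapes and names below are assumptions for illustration.

```python
import numpy as np

def compose_with_labels(projections, label_map):
    """Assemble a mosaic by copying, at each output pixel, the projection chosen by label_map.

    projections: (N, H, W) stack of single-band images projected onto the same terrain grid.
    label_map:   (H, W) integer map of which projection won each pixel when the
                 visible light mosaic was stitched (i.e. the seam decisions).
    """
    _, h, w = projections.shape
    rows, cols = np.indices((h, w))
    return projections[label_map, rows, cols]

# Re-use the visible-band seam decisions for the NIR band.
vis_projs = np.random.rand(3, 120, 160)          # three overlapping visible projections
nir_projs = np.random.rand(3, 120, 160)          # the synchronised NIR projections
label_map = np.random.randint(0, 3, (120, 160))  # stand-in for the stitched seam labels
nir_mosaic = compose_with_labels(nir_projs, label_map)
```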
可选的,所述显示组件13用于:显示所述可见光输出影像和/或所述近红外输出影像。Optionally, the display component 13 is configured to: display the visible light output image and/or the near infrared output image.
Optionally, the processor 12 is further configured to: calculate the normalized difference vegetation index (NDVI) and/or the enhanced vegetation index (EVI) based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
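For reference, the standard band-ratio formulas behind these indices, computed per pixel from the co-registered mosaics. The EVI coefficients shown are the commonly used MODIS values; the disclosure does not fix them.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0, eps=1e-9):
    """Enhanced Vegetation Index with the widely used MODIS coefficients."""
    nir, red, blue = (b.astype(np.float64) for b in (nir, red, blue))
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L + eps)

# Reflectance-like inputs in [0, 1]; dense vegetation gives NDVI close to 1.
nir = np.array([[0.6, 0.5]])
red = np.array([[0.1, 0.4]])
blue = np.array([[0.05, 0.2]])
print(ndvi(nir, red))
print(evi(nir, red, blue))
```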
可选的,所述显示组件13还用于:显示所述指数图。Optionally, the display component 13 is further configured to: display the index map.
Optionally, the processor 12 is further configured to analyze the growth status of the vegetation based on the index map and output the analysis result.
可选的,所述处理器12用于:基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the first photographing device when capturing the visible light image, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the infrared image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; splicing the projection of the near-infrared image on the surface of the terrain based on the cost function to obtain a near-infrared Output image.
可选的,所述处理器12用于:基于所述第二拍摄设备在拍摄所述近红外影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the second photographing device when the near-infrared image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a near-infrared image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the near-infrared image on the surface of the terrain based on the cost function to obtain a near Infrared output image.
Optionally, the first photographing device is a wide-angle camera, and the second photographing device is a near-infrared camera.
本实施例提供的地面站能够用于执行图6实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The ground station provided in this embodiment can be used to perform the method in the embodiment of FIG. 6, and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种地面站。该地面站在图11实施例的基础上,所述通信接口11用于:获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的红外影像,所述第一拍摄设备和所述第二拍摄设备同步拍摄。所述处理器12用于:基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述红外影像时的位置和姿态。Embodiments of the present invention provide a ground station. The ground station is based on the embodiment of FIG. 11 , and the communication interface 11 is configured to: acquire a visible light image obtained by the first imaging device mounted on the aircraft, and acquire an infrared image obtained by the second imaging device mounted on the aircraft. The first photographing device and the second photographing device are simultaneously photographed. The processor 12 is configured to calculate, according to a preset image processing algorithm, a position and a posture of the first photographing device when capturing the visible light image; and the first photographing device and the second based on pre-calibration The relative positional relationship between the photographing devices is used to calculate the position and posture of the second photographing device when the infrared image is captured.
可选的,所述处理器12用于:对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述红外影像的投影进行拼接,获得红外输出影像。Optionally, the processor 12 is configured to: perform splicing processing on the projection of the visible light image on the terrain surface to obtain a visible light output image; and perform splicing based on the projection of the visible light image on the terrain surface The stitching line used is used to splicing the projection of the infrared image to obtain an infrared output image.
可选的,显示组件13用于:显示所述红外输出影像和/或所述可见光输出影像。Optionally, the display component 13 is configured to: display the infrared output image and/or the visible light output image.
可选的,所述处理器12用于:在所述第二拍摄设备拍摄获得的红外影像或者所述红外输出影像中识别出热源物体的位置。Optionally, the processor 12 is configured to: identify a location of the heat source object in the infrared image captured by the second photographing device or the infrared output image.
可选的,所述显示组件13还用于:显示所述热源物体在所述红外影像或所述红外输出影像中的位置。Optionally, the display component 13 is further configured to: display a position of the heat source object in the infrared image or the infrared output image.
可选的,所述热源物体包括电力线。Optionally, the heat source object comprises a power line.
Optionally, the processor 12 is configured to model the identified power lines based on the position and posture of the second photographing device when capturing the infrared image and on a preset power-line mathematical model, to form a power-line layer; and the display component 13 is configured to display the power-line layer superimposed on the visible light output image.
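The disclosure leaves the power-line mathematical model unspecified; a conductor hanging between two towers is classically described by a catenary, often approximated by a parabola over a span. The sketch below fits such a parabolic sag model to sample points of a detected line so it can be rendered as a vector layer; the data are synthetic.

```python
import numpy as np

def fit_powerline_span(xy):
    """Fit z = a*x^2 + b*x + c (parabolic sag approximation of a catenary) to detected points.

    xy: (N, 2) array of horizontal position along the span (m) and height (m).
    Returns the polynomial coefficients (a, b, c).
    """
    return np.polyfit(xy[:, 0], xy[:, 1], deg=2)

# Synthetic detections along a 200 m span with about 2 m of sag and a little noise.
x = np.linspace(0.0, 200.0, 40)
true_z = 30.0 - 2.0 * (1 - ((x - 100.0) / 100.0) ** 2)   # lowest point mid-span
z = true_z + np.random.default_rng(1).normal(0, 0.05, x.size)
a, b, c = fit_powerline_span(np.column_stack([x, z]))
layer = np.polyval([a, b, c], x)   # sampled curve, ready to overlay on the visible mosaic
```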
Optionally, the processor 12 is configured to: perform dense matching based on the position and posture of the first photographing device when capturing the visible light image to generate a corresponding dense or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by dense matching; and stitch the projection of the infrared image on the terrain surface based on the cost function to obtain an infrared output image.
可选的,所述处理器12用于:基于所述第二拍摄设备在拍摄所述红外影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述红外影像在所述地形表面上的投影进行拼接,获得红外输出影像。Optionally, the processor 12 is configured to perform a dense matching based on a position and a posture of the second photographing device when the infrared image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; and stitching the projection of the infrared image on the surface of the terrain based on the cost function to obtain an infrared output image.
可选的,所述第一拍摄设备为广角相机,所述第二拍摄设备为红外相机。Optionally, the first photographing device is a wide-angle camera, and the second photographing device is an infrared camera.
本实施例提供的地面站能够用于执行图8实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 8 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种地面站。图12为本发明实施例提供的地面站的结构示意图,如图12所示,地面站20包括:通信接口21、一个或多个处理器22;所述一个或多个处理器22单独或协同工作,所述通信接口21和所述处理器22连接;所述通信接口21用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;所述处理器22用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;所述处理器22用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;所述处理器22用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。Embodiments of the present invention provide a ground station. FIG. 12 is a schematic structural diagram of a ground station according to an embodiment of the present invention. As shown in FIG. 12, the ground station 20 includes: a communication interface 21, one or more processors 22; and the one or more processors 22 are separately or cooperatively Working, the communication interface 21 is connected to the processor 22; the communication interface 21 is configured to: acquire a first image captured by a first photographing device mounted on the aircraft, and acquire a second photographing device mounted on the aircraft. a second image, wherein an FOV of the first photographing device is greater than or equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft relative to the ground The processor 22 is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured and a position of the second photographing device when the second image is photographed And a gesture; the processor 22 is configured to: generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured; 22: for performing projection and splicing processing on the second image on the terrain surface based on a position and a posture of the second photographing device when the second image is captured, to obtain an output image.
可选的,当所述飞行器相对于地表以固定的相对高度飞行时,所述第一拍摄设备和所述第二拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。Optionally, when the aircraft is flying at a fixed relative height with respect to the ground surface, the first photographing device and the second photographing device perform photographing at the same photographing interval in the horizontal direction.
Optionally, when the height of the aircraft relative to the ground surface changes, the shooting interval of the first photographing device and the second photographing device changes.
Optionally, when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and with the height of the aircraft relative to the ground surface.
本实施例提供的地面站能够用于执行图9实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The ground station provided by this embodiment can be used to perform the method of the embodiment of FIG. 9 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种飞行器控制器,该飞行器控制器可以是上述实施例所述的飞行器控制器。图13为本发明实施例提供的飞行器控制器的结构示意图,如图13所示,飞行器控制器30包括:通信接口31、一个或多个处理器32;一个或多个处理器单独或协同工作,通信接口31和处理器32连接;所述通信接口31用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;所述处理器32用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;所述处理器32用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;所述处理器32还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。An embodiment of the present invention provides an aircraft controller, which may be the aircraft controller described in the above embodiments. 13 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention. As shown in FIG. 13, the aircraft controller 30 includes: a communication interface 31, one or more processors 32; and one or more processors work alone or in cooperation. The communication interface 31 is connected to the processor 32. The communication interface 31 is configured to: acquire a first image captured by a first camera mounted on the aircraft, and acquire a second image obtained by the second camera mounted on the aircraft, where The FOV of the first photographing device is greater than or equal to a preset threshold; the processor 32 is configured to: calculate a position and a posture and a posture of the first photographing device when the first image is captured based on a preset algorithm a position and a posture of the second photographing device when the second image is captured; the processor 32 is configured to: based on the position of the first image and the first photographing device when photographing the first image, a gesture, generating a terrain surface; the processor 32 is further configured to: perform a second position on the terrain surface based on a position and a posture of the second photographing device when the second image is captured The image is projected and stitched to obtain an output image.
可选的,所述第一拍摄设备的FOV大于所述第二拍摄设备的FOV。Optionally, the FOV of the first photographing device is greater than the FOV of the second photographing device.
可选的,所述第二拍摄设备的FOV小于所述预设阈值。Optionally, the FOV of the second photographing device is less than the preset threshold.
Optionally, the processor 32 is configured to convert the position and posture of the first photographing device when capturing the first image, and the position and posture of the second photographing device when capturing the second image, into a position and posture in the world coordinate system, based on GPS information of preset image control points.
Optionally, the processor 32 is configured to: determine, based on GPS information of preset image control points, the relative position of the image control points in the first image captured by the first photographing device; convert the position and posture of the first photographing device when capturing the first image into a position and posture in the world coordinate system, based on the relative position of the image control points in the first image and the GPS information of the image control points; and convert the position and posture of the second photographing device when capturing the second image into a position and posture in the world coordinate system, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
可选的,所述处理器32用于:以预先设定的像控点作为约束条件,采用运动恢复结构SFM算法计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态;基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述第二影像时的位置和姿态。Optionally, the processor 32 is configured to: use a motion recovery structure SFM algorithm to calculate a position and a posture of the first photographing device when the first image is captured, by using a preset image control point as a constraint condition; And calculating a position and a posture of the second photographing device when the second image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
可选的,所述处理器32用于:基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于稠密匹配生成的点云拟合形成地形表面。Optionally, the processor 32 is configured to perform a dense matching based on a position and a posture of the first photographing device when the first image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; The resulting point cloud fits to form a terrain surface.
Optionally, the processor 32 is configured to: extract ground points from the point cloud generated by dense matching; and fit the extracted ground points to form the terrain surface.
Optionally, the processor 32 is further configured to perform global color and/or brightness adjustment on the projection of the second image on the terrain surface.
Optionally, the preset image processing algorithm includes any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
可选的,所述输出影像包括正射影像。Optionally, the output image includes an orthophoto.
可选的,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联。Optionally, the shooting interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft relative to the ground.
可选的,当所述飞行器相对于地表以固定的相对高度飞行时,所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。Optionally, when the aircraft is flying at a fixed relative height with respect to the ground surface, the first photographing device and the second photographing device respectively photograph at the same photographing interval in the horizontal direction.
可选的,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。Optionally, when the aircraft changes in height relative to the surface, the shooting interval of the first photographing device and the second photographing device changes.
Optionally, when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and with the height of the aircraft relative to the ground surface.
本实施例提供的飞行器控制器能够执行图1实施例的技术方案,其执行方式和有益效果类似,在这里不再赘述。The aircraft controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
An embodiment of the present invention provides an aircraft controller. On the basis of the embodiment of FIG. 13, the communication interface 31 is configured to acquire a first visible light image captured by the first photographing device mounted on the aircraft and a second visible light image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device shooting synchronously. The processor 32 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the first photographing device when capturing the first visible light image; and calculate, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when capturing the second visible light image.
可选的,处理器32用于:基于对所述第一可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 32 is configured to: perform a projection on the terrain surface of the second visible light image based on a splicing line used when splicing the projection of the first visible light image on the terrain surface Splicing, obtaining a visible light output image corresponding to the second photographing device.
可选的,所述处理器32用于:基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 32 is configured to perform a dense matching based on a position and a posture of the first photographing device when the first visible light image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the second visible light image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the second visible light image on the terrain surface based on the cost function Obtaining a visible light output image corresponding to the second photographing device.
可选的,所述处理器32用于:基于所述第二拍摄设备在拍摄所述第二可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。Optionally, the processor 32 is configured to perform a dense matching based on a position and a posture of the second photographing device when the second visible light image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the second visible light image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the second visible light image on the terrain surface based on the cost function Obtaining a visible light output image corresponding to the second photographing device.
Optionally, the processor 32 is further configured to project the first visible light image onto the terrain surface based on the position and posture of the first photographing device when capturing the first visible light image.
可选的,所述处理器32还用于:对所述第二可见光影像在所述地形表面上的投影进行正射处理。Optionally, the processor 32 is further configured to orthographically process the projection of the second visible light image on the terrain surface.
可选的,所述第一拍摄设备为广角相机,所述第二拍摄设备为长焦相机。Optionally, the first photographing device is a wide-angle camera, and the second photographing device is a telephoto camera.
本实施例提供的飞行器控制器能够用于执行图5实施例的方法,其执行方式和有益效果类似,这里不再赘述。The aircraft controller provided in this embodiment can be used to perform the method of the embodiment of FIG. 5, and the execution manner and the beneficial effects are similar, and details are not described herein again.
An embodiment of the present invention provides an aircraft controller. On the basis of the embodiment of FIG. 13, the communication interface 31 is configured to acquire a visible light image captured by the first photographing device mounted on the aircraft and a near-infrared image captured by the second photographing device, the first photographing device and the second photographing device shooting synchronously. The processor 32 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the first photographing device when capturing the visible light image; and calculate, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device, the position and posture of the second photographing device when capturing the near-infrared image.
Optionally, the processor 32 is configured to: stitch the projection of the visible light image on the terrain surface to obtain a visible light output image; and stitch the projection of the near-infrared image on the terrain surface along the seam lines used when stitching the projection of the visible light image on the terrain surface, to obtain a near-infrared output image.
Optionally, the processor 32 is further configured to: calculate the normalized difference vegetation index (NDVI) and/or the enhanced vegetation index (EVI) based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
可选的,所述处理器32还用于:基于所述指数图,分析植被的生长状况,并输出分析结果。Optionally, the processor 32 is further configured to: analyze the growth status of the vegetation based on the index map, and output the analysis result.
可选的,所述处理器32用于:基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。 Optionally, the processor 32 is configured to perform a dense matching based on a position and a posture of the first photographing device when capturing the visible light image, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the infrared image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; splicing the projection of the near-infrared image on the surface of the terrain based on the cost function to obtain a near-infrared Output image.
可选的,所述处理器32用于:基于所述第二拍摄设备在拍摄所述近红外影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。Optionally, the processor 32 is configured to perform a dense matching based on a position and a posture of the second photographing device when the near-infrared image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a near-infrared image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; stitching the projection of the near-infrared image on the surface of the terrain based on the cost function to obtain a near Infrared output image.
可选的,所述第一拍摄设备为广角相机,所述第二拍摄设备为近红外相机。Optionally, the first photographing device is a wide-angle camera, and the second photographing device is a near-infrared camera.
本实施例提供的飞行器控制器能够用于执行图6实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The aircraft controller provided by this embodiment can be used to perform the method of the embodiment of FIG. 6, and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种飞行器控制器。该飞行器控制器在图13实施例的基础上,所述通信接口31用于:获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的红外影像,所述第一拍摄设备和所述第二拍摄设备同步拍摄。所述处理器32用于:基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述红外影像时的位置和姿态。Embodiments of the present invention provide an aircraft controller. The aircraft controller is based on the embodiment of FIG. 13 , the communication interface 31 is configured to: acquire a visible light image captured by a first photographing device carried by the aircraft, and acquire an infrared image obtained by the second photographing device mounted on the aircraft, The first photographing device and the second photographing device are simultaneously photographed. The processor 32 is configured to calculate, according to a preset image processing algorithm, a position and a posture of the first photographing device when the visible light image is captured; and the first photographing device and the second The relative positional relationship between the photographing devices is used to calculate the position and posture of the second photographing device when the infrared image is captured.
可选的,所述处理器32用于:对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述红外影像的投影进行拼接,获得红外输出影像。Optionally, the processor 32 is configured to: perform splicing processing on the projection of the visible light image on the terrain surface to obtain a visible light output image; and perform splicing based on the projection of the visible light image on the terrain surface The stitching line used is used to splicing the projection of the infrared image to obtain an infrared output image.
可选的,所述处理器32用于:在所述第二拍摄设备拍摄获得的红外影像或者所述红外输出影像中识别出热源物体的位置。Optionally, the processor 32 is configured to: identify a location of the heat source object in the infrared image captured by the second photographing device or the infrared output image.
可选的,所述热源物体包括电力线。Optionally, the heat source object comprises a power line.
Optionally, the processor 32 is configured to model the identified power lines based on the position and posture of the second photographing device when capturing the infrared image and on a preset power-line mathematical model, to form a power-line layer; and the display component 13 is configured to display the power-line layer superimposed on the visible light output image.
Optionally, the processor 32 is configured to: perform dense matching based on the position and posture of the first photographing device when capturing the visible light image to generate a corresponding dense or semi-dense point cloud; construct a cost function based on the projection of the infrared image on the terrain surface and the point cloud generated by dense matching; and stitch the projection of the infrared image on the terrain surface based on the cost function to obtain an infrared output image.
可选的,所述处理器32用于:基于所述第二拍摄设备在拍摄所述红外影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;基于所述红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;基于所述代价函数对所述红外影像在所述地形表面上的投影进行拼接,获得红外输出影像。Optionally, the processor 32 is configured to: perform dense matching based on a position and a posture of the second photographing device when the infrared image is captured, to generate a corresponding dense point cloud or a semi-dense point cloud; Projecting a projection of the image on the surface of the terrain, and a point cloud generated by the dense matching, constructing a cost function; and stitching the projection of the infrared image on the surface of the terrain based on the cost function to obtain an infrared output image.
可选的,所述第一拍摄设备为广角相机,所述第二拍摄设备为红外相机。Optionally, the first photographing device is a wide-angle camera, and the second photographing device is an infrared camera.
本实施例提供的飞行器控制器能够用于执行图8实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The aircraft controller provided by this embodiment can be used to perform the method of the embodiment of FIG. 8 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种飞行器控制器。图14为本发明实施例提供的飞行器控制器的结构示意图,如图14所示,飞行器控制器40包括:通信接口41、一个或多个处理器42;所述一个或多个处理器42单独或协同工作,所述通信接口41和所述处理器42连接;所述通信接口41用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;所述处理器42用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;所述处理器42用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;所述处理器42用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。Embodiments of the present invention provide an aircraft controller. 14 is a schematic structural diagram of an aircraft controller according to an embodiment of the present invention. As shown in FIG. 14, the aircraft controller 40 includes: a communication interface 41, one or more processors 42; and the one or more processors 42 are separate Or cooperatively working, the communication interface 41 is connected to the processor 42; the communication interface 41 is configured to: acquire a first image captured by a first photographing device mounted on the aircraft, and acquire a second photographing device mounted on the aircraft. a second image obtained, wherein an FOV of the first photographing device is greater than or equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device and a flight of the aircraft relative to the ground The processor 42 is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and when the second photographing device captures the second image Position and posture; the processor 42 is configured to: generate a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured The processor 42 is configured to: perform projection and splicing processing on the second image on the terrain surface based on a position and a posture of the second photographing device when the second image is captured, to obtain an output image. .
Optionally, when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device each shoot at the same shooting interval in the horizontal direction.
可选的,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。Optionally, when the aircraft changes in height relative to the surface, the shooting interval of the first photographing device and the second photographing device changes.
Optionally, when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and with the height of the aircraft relative to the ground surface.
本实施例提供的飞行器控制器能够用于执行图9实施例的方法,其执行方式和有益效果类似,在这里不再赘述。The aircraft controller provided in this embodiment can be used to perform the method of the embodiment of FIG. 9 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行上述实施例提供的输出影像生成方法。The embodiment of the present invention provides a computer readable storage medium, including instructions, when executed on a computer, causing a computer to execute the output image generating method provided by the foregoing embodiment.
An embodiment of the present invention provides an unmanned aerial vehicle. The unmanned aerial vehicle includes: a fuselage; a power system mounted on the fuselage and configured to provide flight power; a first photographing device and a second photographing device mounted on the fuselage and configured to capture images, where the FOV of the first photographing device is greater than or equal to a preset threshold; and the aircraft controller described in the above embodiments.
其中,本实施例提供的无人机,其执行方式和有益效果与上述实施例所涉及的飞行器控制器相同,在这里不再赘述。The execution mode and the beneficial effects of the unmanned aerial vehicle provided in this embodiment are the same as those of the aircraft controller in the foregoing embodiment, and are not described herein again.
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of the unit is only a logical function division. In actual implementation, there may be another division manner, for example, multiple units or components may be combined or Can be integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。 The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional modules described above is merely an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, and details are not described herein again.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (135)

  1. 一种输出影像生成方法,其特征在于,包括:An output image generating method, comprising:
    获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的视场角FOV大于或等于预设阈值;Obtaining a first image obtained by the first photographing device carried by the aircraft, and acquiring a second image obtained by the second photographing device mounted on the aircraft, wherein the first photographing device has a field of view angle FOV greater than or equal to a preset threshold ;
    基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when photographing the second image, based on a preset algorithm;
    基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;Generating a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured;
    基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。And based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
  2. 根据权利要求1所述的方法,其特征在于,所述第一拍摄设备的FOV大于所述第二拍摄设备的FOV。The method of claim 1 wherein the FOV of the first photographing device is greater than the FOV of the second photographing device.
  3. 根据权利要求2所述的方法,其特征在于,所述第二拍摄设备的FOV小于所述预设阈值。The method of claim 2 wherein the FOV of the second photographing device is less than the predetermined threshold.
  4. 根据权利要求3所述的方法,其特征在于,所述获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,包括:The method according to claim 3, wherein the acquiring the first image obtained by the first photographing device carried by the aircraft and acquiring the second image obtained by the second photographing device carried by the aircraft comprises:
    获取飞行器搭载的第一拍摄设备拍摄获得的第一可见光影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二可见光影像,所述第一拍摄设备和所述第二拍摄设备同步拍摄。Obtaining a first visible light image obtained by the first photographing device mounted on the aircraft, and acquiring a second visible light image obtained by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device simultaneously photographing.
  5. 根据权利要求4所述的方法,其特征在于,所述基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,包括:The method according to claim 4, wherein the calculating, based on a preset algorithm, a position and a posture of the first photographing device when the first image is captured and the second photographing device are photographing the photographing The position and posture of the second image, including:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备的相对位置关系,计算所述第二拍摄设备在拍摄所述第二可见光影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the second visible light image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  6. The method according to claim 4, wherein the splicing processing of the second image on the terrain surface comprises:
    基于对所述第一可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。And splicing the projection of the second visible light image on the terrain surface according to a splicing line used for splicing the projection of the first visible light image on the surface of the terrain, and obtaining the corresponding second shooting device Visible light output image.
  7. 根据权利要求4所述的方法,其特征在于,所述在所述地形表面上对所述第二影像进行拼接处理,包括:The method according to claim 4, wherein said splicing said second image on said surface of said terrain comprises:
    基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when photographing the first visible light image to generate a corresponding dense point cloud or semi-dense point cloud;
    基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the second visible light image on the terrain surface and a point cloud generated by the dense matching;
    基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。The projection of the second visible light image on the terrain surface is spliced based on the cost function, and the visible light output image corresponding to the second imaging device is obtained.
  8. 根据权利要求4所述的方法,其特征在于,所述方法还包括:The method of claim 4, wherein the method further comprises:
    基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态,将所述第一可见光影像投影到所述地形表面。The first visible light image is projected onto the terrain surface based on a position and a posture of the first photographing device when the first visible light image is captured.
  9. 根据权利要求6或7所述的方法,其特征在于,所述方法还包括:The method according to claim 6 or 7, wherein the method further comprises:
    对所述第二可见光影像在所述地形表面上的投影进行正射处理。Projecting the projection of the second visible light image on the surface of the terrain to orthographic processing.
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为长焦相机。The method according to any one of claims 1 to 9, wherein the first photographing device is a wide-angle camera and the second photographing device is a telephoto camera.
  11. 根据权利要求1所述的方法,其特征在于,所述获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,包括:The method according to claim 1, wherein the acquiring the first image obtained by the first photographing device carried by the aircraft and acquiring the second image obtained by the second photographing device carried by the aircraft comprises:
    obtaining a visible light image captured by the first photographing device mounted on the aircraft and a near-infrared image captured by the second photographing device, the first photographing device and the second photographing device shooting synchronously.
  12. 根据权利要求11所述的方法,其特征在于,所述基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,包括:The method according to claim 11, wherein the calculating, based on a preset algorithm, a position and a posture of the first photographing device when the first image is captured and the second photographing device are photographing the photographing The position and posture of the second image, including:
    calculating, based on a preset image processing algorithm, a position and a posture of the first photographing device when capturing the visible light image;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述近红外影像时的位置和姿态。And calculating a position and a posture of the second photographing device when photographing the near-infrared image based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  13. 根据权利要求11所述的方法,其特征在于,所述在所述地形表面上对所述第二影像进行拼接处理,包括:The method according to claim 11, wherein said splicing processing said second image on said terrain surface comprises:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。The projection of the near-infrared image on the surface of the terrain is spliced based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain a near-infrared output image.
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:The method of claim 13 wherein the method further comprises:
    显示所述可见光输出影像和/或所述近红外输出影像。Displaying the visible light output image and/or the near infrared output image.
  15. 根据权利要求13所述的方法,其特征在于,所述方法还包括:The method of claim 13 wherein the method further comprises:
    calculating a normalized difference vegetation index (NDVI) and/or an enhanced vegetation index (EVI) based on the visible light output image and the near-infrared output image, and drawing a corresponding index map based on the calculated NDVI and/or EVI.
  16. 根据权利要求15所述的方法,其特征在于,所述方法还包括:The method of claim 15 wherein the method further comprises:
    显示所述指数图。The index map is displayed.
  17. 根据权利要求16所述的方法,其特征在于,所述方法还包括:The method of claim 16 wherein the method further comprises:
    基于所述指数图,分析植被的生长状况,并输出分析结果。Based on the index map, the growth state of the vegetation is analyzed, and the analysis result is output.
  18. 根据权利要求11所述的方法,其特征在于,所述在所述地形表面上对所述第二影像进行拼接处理,包括:The method according to claim 11, wherein said splicing processing said second image on said terrain surface comprises:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the near-infrared image on the surface of the terrain and a point cloud generated by the dense matching;
    基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。A projection of the near-infrared image on the surface of the terrain is spliced based on the cost function to obtain a near-infrared output image.
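    Claim 18 stitches the near-infrared projections by minimizing a cost function built from the projections and the dense (or semi-dense) point cloud, but it does not specify the form of that cost. The sketch below is one plausible construction, not the patent's definition: photometric disagreement between two overlapping projections plus a roughness term from the rasterized point cloud, followed by a simple dynamic-programming seam search; all array names are hypothetical.

```python
import numpy as np

def seam_cost(proj_a, proj_b, depth, w_geom=0.5):
    """Per-pixel cost over the overlap of two terrain-projected images.

    proj_a, proj_b : HxW projections of two NIR frames onto the terrain grid
    depth          : HxW surface height rasterized from the dense/semi-dense point cloud
    The cost prefers seams where the projections agree and the surface is locally flat.
    """
    photometric = np.abs(proj_a - proj_b)
    gy, gx = np.gradient(depth)
    roughness = np.hypot(gx, gy)
    return photometric + w_geom * roughness

def min_cost_vertical_seam(cost):
    """Dynamic programming: pick one seam column per row minimizing the accumulated cost."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for r in range(1, h):
        left = np.r_[np.inf, acc[r - 1, :-1]]
        right = np.r_[acc[r - 1, 1:], np.inf]
        acc[r] += np.minimum(acc[r - 1], np.minimum(left, right))
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(acc[r, lo:hi]))
    return seam   # pixels left of seam[r] come from proj_a, the rest from proj_b
```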
  19. 根据权利要求11-18中任一项所述的方法，其特征在于，所述第一拍摄设备为广角相机，所述第二拍摄设备为近红外相机。The method according to any one of claims 11-18, wherein the first photographing device is a wide-angle camera and the second photographing device is a near-infrared camera.
  20. 根据权利要求1所述的方法,其特征在于,所述获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,包括:The method according to claim 1, wherein the acquiring the first image obtained by the first photographing device carried by the aircraft and acquiring the second image obtained by the second photographing device carried by the aircraft comprises:
    获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的红外影像,所述第一拍摄设备和所述第二拍摄设备同步拍摄。Obtaining a visible light image obtained by the first photographing device mounted on the aircraft, and acquiring an infrared image obtained by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device simultaneously photographing.
  21. 根据权利要求20所述的方法，其特征在于，所述基于预设算法，计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，包括：The method according to claim 20, wherein the calculating, based on a preset algorithm, a position and a posture of the first photographing device when capturing the first image and a position and a posture of the second photographing device when capturing the second image comprises:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述红外影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the infrared image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  22. 根据权利要求20所述的方法,其特征在于,所述在所述地形表面上对所述第二影像进行拼接处理,包括:The method according to claim 20, wherein said splicing said second image on said terrain surface comprises:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述红外影像的投影进行拼接,获得红外输出影像。And projecting the projection of the infrared image based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain an infrared output image.
  23. 根据权利要求22所述的方法,其特征在于,所述方法还包括:显示所述红外输出影像和/或所述可见光输出影像。The method of claim 22, further comprising: displaying the infrared output image and/or the visible light output image.
  24. 根据权利要求22所述的方法,其特征在于,所述方法还包括:The method of claim 22, wherein the method further comprises:
    在所述第二拍摄设备拍摄获得的红外影像或者所述红外输出影像中识别出热源物体的位置。A position of the heat source object is recognized in the infrared image obtained by the second photographing device or the infrared output image.
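    Claim 24 locates heat source objects in the infrared frames or in the stitched infrared mosaic. A minimal detection sketch, assuming a radiometric (temperature-like) image and a simple statistical threshold; the claim itself does not prescribe any particular detector.

```python
import numpy as np
from scipy import ndimage

def detect_heat_sources(thermal, threshold=None, min_pixels=20):
    """Locate hot objects in an infrared image or in the stitched infrared output mosaic.

    thermal   : 2D array of per-pixel temperature or raw thermal intensity
    threshold : hot-pixel cutoff; if None, use a simple statistical cutoff (mean + 3*std)
    Returns a list of (row, col) centroids of connected hot regions with at least min_pixels pixels.
    """
    if threshold is None:
        threshold = thermal.mean() + 3.0 * thermal.std()
    mask = thermal > threshold
    labels, n = ndimage.label(mask)                      # connected components of hot pixels
    centroids = []
    for idx in range(1, n + 1):
        if np.count_nonzero(labels == idx) >= min_pixels:
            centroids.append(ndimage.center_of_mass(mask, labels, idx))
    return centroids
```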
  25. 根据权利要求24所述的方法,其特征在于,所述方法还包括:The method of claim 24, wherein the method further comprises:
    显示所述热源物体在所述红外影像或所述红外输出影像中的位置。A position of the heat source object in the infrared image or the infrared output image is displayed.
  26. 根据权利要求24所述的方法,其特征在于,所述热源物体包括电力线。 The method of claim 24 wherein said heat source object comprises a power line.
  27. 根据权利要求26所述的方法,其特征在于,所述方法还包括:The method of claim 26, wherein the method further comprises:
    基于所述第二拍摄设备在拍摄所述红外影像时的位置和姿态，以及预设电力线数学模型，对识别出的电力线进行建模，形成电力线图层；Modeling the identified power line based on the position and posture of the second photographing device when capturing the infrared image and a preset power line mathematical model, to form a power line layer;
    在所述可见光输出影像上叠加显示所述电力线图层。The power line layer is superimposed on the visible light output image.
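    Claim 27 fits the detected power line to a preset mathematical model before rasterizing it as a layer over the visible-light mosaic. The claim does not name the model; a catenary is the standard choice for a suspended conductor, so the sketch below fits one to 3D samples triangulated along the line using the second device's poses (all inputs are hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """Classic catenary sag model z = c + a * cosh((x - x0) / a)."""
    return c + a * np.cosh((x - x0) / a)

def fit_power_line(span_x, span_z):
    """Fit catenary parameters to 3D points sampled along one detected conductor.

    span_x : horizontal distance along the span (metres), triangulated from the infrared
             detections using the second photographing device's pose at each capture
    span_z : corresponding heights (metres)
    Returns (a, x0, c); the fitted curve can be rasterized into a power-line layer and
    overlaid on the visible-light output mosaic.
    """
    guess = (100.0, float(np.mean(span_x)), float(np.min(span_z)))
    params, _ = curve_fit(catenary, span_x, span_z, p0=guess, maxfev=10000)
    return params
```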
  28. 根据权利要求20所述的方法,其特征在于,所述在所述地形表面上对所述第二影像进行拼接处理,包括:The method according to claim 20, wherein said splicing said second image on said terrain surface comprises:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the infrared image on the surface of the terrain and a point cloud generated by the dense matching;
    基于所述代价函数对所述红外影像在所述地形表面上的投影进行拼接,获得红外输出影像。The projection of the infrared image on the surface of the terrain is spliced based on the cost function to obtain an infrared output image.
  29. 根据权利要求20-28中任一项所述的方法,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为红外相机。The method according to any one of claims 20 to 28, wherein the first photographing device is a wide-angle camera and the second photographing device is an infrared camera.
  30. 根据权利要求1所述的方法，其特征在于，所述基于预设算法，计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态之后，所述方法还包括：The method according to claim 1, wherein after the calculating, based on a preset algorithm, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image, the method further comprises:
    基于预先设定的像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态，以及所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，转换为世界坐标系下的位置和姿态。Converting, based on GPS information of preset image control points, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image into positions and postures in the world coordinate system.
  31. 根据权利要求30所述的方法，其特征在于，所述基于预先设定的像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态，以及所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，转换为世界坐标系下的位置和姿态，包括：The method according to claim 30, wherein the converting, based on GPS information of preset image control points, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image into positions and postures in the world coordinate system comprises:
    基于预先设定的像控点的GPS信息,确定所述像控点在所述第一拍摄设备拍摄获得的第一影像中的相对位置;Determining a relative position of the image control point in the first image captured by the first photographing device based on GPS information of a preset image control point;
    基于所述像控点在所述第一影像中的相对位置，以及所述像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态转换为世界坐标系下的位置和姿态；Converting the position and posture of the first photographing device when capturing the first image into a position and a posture in the world coordinate system, based on the relative position of the image control point in the first image and the GPS information of the image control point;
    基于预先标定的第一拍摄设备和所述第二拍摄设备之间的相对位置关系，将所述第二拍摄设备在拍摄所述第二影像时的位置和姿态转换为世界坐标系下的位置和姿态。Converting the position and posture of the second photographing device when capturing the second image into a position and a posture in the world coordinate system, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
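    Claim 31 converts the locally reconstructed poses into the world coordinate system using surveyed image control points. One common way to do this is to solve a similarity transform between the control points' reconstructed coordinates and their GPS coordinates and then apply it to every camera pose; the sketch below uses the Umeyama least-squares alignment and is illustrative only.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity (scale s, rotation R, translation t) with dst ≈ s*R*src + t.

    src : Nx3 image-control-point coordinates in the local reconstruction frame
    dst : Nx3 corresponding GPS/world coordinates of the same points
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # keep R a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / np.sum(src_c ** 2) * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

def camera_pose_to_world(R_cam, t_cam, s, R, t):
    """Map a camera pose (camera-to-world rotation, camera center) into the world frame."""
    return R @ R_cam, s * R @ t_cam + t
```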
  32. 根据权利要求31所述的方法,其特征在于,所述方法还包括:The method of claim 31, wherein the method further comprises:
    显示所述像控点在所述第一影像上的所述相对位置。Displaying the relative position of the image control point on the first image.
  33. 根据权利要求1所述的方法，其特征在于，所述基于预设算法，计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，包括：The method according to claim 1, wherein the calculating, based on a preset algorithm, a position and a posture of the first photographing device when capturing the first image and a position and a posture of the second photographing device when capturing the second image comprises:
    以预先设定的像控点作为约束条件,采用运动恢复结构SFM算法计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured by using a motion recovery structure SFM algorithm with a preset image control point as a constraint condition;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述第二影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the second image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  34. 根据权利要求1所述的方法,其特征在于,所述基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面,包括:The method according to claim 1, wherein the generating a terrain surface based on the position and posture of the first photographing device when the first image is captured comprises:
    基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when photographing the first image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于稠密匹配生成的点云拟合形成地形表面。A point cloud fit based on dense matching forms a terrain surface.
  35. 根据权利要求34所述的方法，其特征在于，所述基于稠密匹配生成的点云拟合形成地形表面，包括：The method according to claim 34, wherein the fitting of the point cloud generated by the dense matching to form the terrain surface comprises:
    从稠密匹配生成的点云中提取地面点;Extracting ground points from a point cloud generated by dense matching;
    基于提取出的地面点拟合形成地形表面。A terrain surface is formed based on the extracted ground point fit.
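    Claims 34 and 35 build the terrain surface by densely matching the first device's images, extracting ground points from the resulting point cloud, and fitting a surface to them. A minimal sketch of the last two steps, assuming a simple lowest-point-per-cell ground filter and grid interpolation; real pipelines typically use more robust ground classification.

```python
import numpy as np
from scipy.interpolate import griddata

def extract_ground_points(points, cell=2.0):
    """Crude ground filter: keep the lowest point in every cell of a horizontal grid.

    points : Nx3 array (x, y, z) from the dense or semi-dense point cloud
    cell   : grid cell size in metres; larger cells suppress more vegetation and buildings
    """
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    ground = {}
    for (kx, ky), p in zip(map(tuple, keys), points):
        if (kx, ky) not in ground or p[2] < ground[(kx, ky)][2]:
            ground[(kx, ky)] = p
    return np.array(list(ground.values()))

def fit_terrain_surface(ground_points, resolution=1.0):
    """Interpolate the extracted ground points onto a regular height grid (the terrain surface)."""
    xs = np.arange(ground_points[:, 0].min(), ground_points[:, 0].max(), resolution)
    ys = np.arange(ground_points[:, 1].min(), ground_points[:, 1].max(), resolution)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(ground_points[:, :2], ground_points[:, 2], (gx, gy), method='linear')
    return gx, gy, gz
```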
  36. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method of claim 1 further comprising:
    对所述第二影像在所述表面上的投影进行全局色彩和/或亮度调整。Global color and/or brightness adjustment is performed on the projection of the second image on the surface.
  37. 根据权利要求5或12或21所述的方法，其特征在于，所述预设的图像处理算法包括如下任意一种：空中三角测量、从运动恢复结构SFM的算法、即时定位与地图构建SLAM算法。The method according to claim 5, 12 or 21, wherein the preset image processing algorithm comprises any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
  38. 根据权利要求1-37中任一项所述的方法，其特征在于，所述输出影像包括正射影像。The method according to any one of claims 1-37, wherein the output image comprises an orthophoto.
  39. 根据权利要求1-37中任一项所述的方法，其特征在于，所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联。The method according to any one of claims 1-37, wherein the photographing interval of the first photographing device and the second photographing device is associated with the flying height of the aircraft relative to the ground.
  40. 根据权利要求39所述的方法，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。The method according to claim 39, wherein when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device each photograph at the same photographing interval in the horizontal direction.
  41. 根据权利要求39所述的方法,其特征在于,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。The method according to claim 39, wherein a photographing interval of said first photographing device and said second photographing device changes when said aircraft changes in height with respect to a surface.
  42. 根据权利要求41所述的方法，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The method according to claim 41, wherein when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device photograph in the horizontal direction at time-varying photographing intervals, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft with respect to the ground surface.
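    Claims 39-42 tie the photographing interval to the aircraft's height above the terrain and to the pre-configured overlap rate: at a fixed relative height the interval stays constant, while at a fixed absolute altitude over varying terrain it must change over time. The sketch below shows the underlying along-track spacing computation; the field-of-view value and overlap rate in the usage comment are hypothetical.

```python
import math

def shot_spacing(relative_height, fov_deg, overlap):
    """Along-track distance between exposures that yields the configured forward overlap.

    relative_height : height of the aircraft above the terrain directly below (metres)
    fov_deg         : along-track field of view of the photographing device, in degrees
    overlap         : pre-configured forward overlap rate, e.g. 0.8 for 80 %
    """
    footprint = 2.0 * relative_height * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# At a constant relative height the spacing (and, at constant ground speed, the trigger interval)
# stays fixed; at a constant absolute altitude over undulating terrain the relative height changes,
# so the trigger interval has to be recomputed continuously, e.g.:
# interval_s = shot_spacing(h_rel_now, fov_deg=60.0, overlap=0.8) / ground_speed_mps
```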
  43. 一种输出影像生成方法,其特征在于,包括:An output image generating method, comprising:
    获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;Acquiring a first image obtained by the first photographing device carried by the aircraft, and obtaining a second image obtained by the second photographing device mounted on the aircraft, wherein the FOV of the first photographing device is greater than or equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground;
    基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured and a position and a posture of the second photographing device when photographing the second image, based on a preset algorithm;
    基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;Generating a terrain surface based on the position and posture of the first image and the first photographing device when the first image is captured;
    基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。And based on the position and posture of the second photographing device when the second image is captured, the second image is projected and stitched on the terrain surface to obtain an output image.
  44. 根据权利要求43所述的方法，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。The method according to claim 43, wherein when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device photograph at the same photographing interval in the horizontal direction.
  45. 根据权利要求43所述的方法,其特征在于,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。The method according to claim 43, wherein a photographing interval of said first photographing device and said second photographing device changes when said aircraft changes in height with respect to a surface.
  46. 根据权利要求45所述的方法，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The method according to claim 45, wherein when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device photograph in the horizontal direction at time-varying photographing intervals, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft with respect to the ground surface.
  47. 一种地面站,其特征在于,包括:通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;A ground station, comprising: a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Equal to the preset threshold;
    所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
    所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
    所述处理器还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  48. 根据权利要求47所述的地面站,其特征在于,所述第一拍摄设备的FOV大于所述第二拍摄设备的FOV。The ground station according to claim 47, wherein the FOV of the first photographing device is greater than the FOV of the second photographing device.
  49. 根据权利要求48所述的地面站，其特征在于，所述第二拍摄设备的FOV小于所述预设阈值。The ground station according to claim 48, wherein the FOV of the second photographing device is less than the preset threshold.
  50. 根据权利要求49所述的地面站，其特征在于，所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的第一可见光影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的第二可见光影像，所述第一拍摄设备和所述第二拍摄设备同步拍摄。The ground station according to claim 49, wherein the communication interface is configured to: acquire a first visible light image captured by the first photographing device mounted on the aircraft, and acquire a second visible light image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
  51. 根据权利要求50所述的地面站,其特征在于,所述处理器,用于:The ground station according to claim 50, wherein said processor is configured to:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备的相对位置关系,计算所述第二拍摄设备在拍摄所述第二可见光影像时的位置和姿态。 And calculating a position and a posture of the second photographing device when the second visible light image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  52. 根据权利要求50所述的地面站，其特征在于，所述处理器用于：基于对所述第一可见光影像在所述地形表面上的投影进行拼接时采用的拼接线，对所述第二可见光影像在所述地形表面上的投影进行拼接，获得所述第二拍摄设备对应的可见光输出影像。The ground station according to claim 50, wherein the processor is configured to: stitch the projection of the second visible light image on the terrain surface based on the seam lines used when stitching the projection of the first visible light image on the terrain surface, to obtain a visible light output image corresponding to the second photographing device.
  53. 根据权利要求50所述的地面站,其特征在于,所述处理器用于:The ground station of claim 50 wherein said processor is operative to:
    基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when photographing the first visible light image to generate a corresponding dense point cloud or semi-dense point cloud;
    基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the second visible light image on the terrain surface and a point cloud generated by the dense matching;
    基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。The projection of the second visible light image on the terrain surface is spliced based on the cost function, and the visible light output image corresponding to the second imaging device is obtained.
  54. 根据权利要求50所述的地面站，其特征在于，所述处理器还用于：基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态，将所述第一可见光影像投影到所述地形表面。The ground station according to claim 50, wherein the processor is further configured to: project the first visible light image onto the terrain surface based on the position and posture of the first photographing device when capturing the first visible light image.
  55. 根据权利要求52或53所述的地面站,其特征在于,所述处理器还用于:对所述第二可见光影像在所述地形表面上的投影进行正射处理。The ground station according to claim 52 or 53, wherein the processor is further configured to orthographically process the projection of the second visible light image on the terrain surface.
  56. 根据权利要求47-55中任一项所述的地面站,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为长焦相机。A ground station according to any one of claims 47 to 55, wherein the first photographing device is a wide-angle camera and the second photographing device is a telephoto camera.
  57. 根据权利要求47所述的地面站，其特征在于，所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的近红外影像，所述第一拍摄设备和所述第二拍摄设备同步拍摄。The ground station according to claim 47, wherein the communication interface is configured to: acquire a visible light image captured by the first photographing device mounted on the aircraft, and acquire a near-infrared image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
  58. 根据权利要求57所述的地面站,其特征在于,所述处理器用于:A ground station according to claim 57, wherein said processor is for:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系，计算所述第二拍摄设备在拍摄所述近红外影像时的位置和姿态。Calculating a position and a posture of the second photographing device when capturing the near-infrared image, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  59. 根据权利要求57所述的地面站,其特征在于,所述处理器用于:A ground station according to claim 57, wherein said processor is for:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。The projection of the near-infrared image on the surface of the terrain is spliced based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain a near-infrared output image.
  60. 根据权利要求59所述的地面站,其特征在于,所述地面站还包括:显示组件,所述显示组件与所述处理器通信连接;A ground station according to claim 59, wherein said ground station further comprises: a display component, said display component being communicatively coupled to said processor;
    所述显示组件用于:显示所述可见光输出影像和/或所述近红外输出影像。The display component is configured to: display the visible light output image and/or the near infrared output image.
  61. 根据权利要求59所述的地面站，其特征在于，所述处理器还用于：基于所述可见光输出影像和所述近红外输出影像，计算植被覆盖指数NDVI和/或增强型植被指数EVI，并基于计算获得的NDVI和/或EVI，绘制相应的指数图。The ground station according to claim 59, wherein the processor is further configured to: calculate a vegetation index NDVI and/or an enhanced vegetation index EVI based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
  62. 根据权利要求61所述的地面站,其特征在于,显示组件用于:显示所述指数图。A ground station according to claim 61, wherein the display component is operative to: display the index map.
  63. 根据权利要求62所述的地面站,其特征在于,所述处理器还用于:The ground station of claim 62, wherein the processor is further configured to:
    基于所述指数图,分析植被的生长状况,并输出分析结果。Based on the index map, the growth state of the vegetation is analyzed, and the analysis result is output.
  64. 根据权利要求57所述的地面站,其特征在于,所述处理器用于:A ground station according to claim 57, wherein said processor is for:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the near-infrared image on the surface of the terrain and a point cloud generated by the dense matching;
    基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。A projection of the near-infrared image on the surface of the terrain is spliced based on the cost function to obtain a near-infrared output image.
  65. 根据权利要求57-64中任一项所述的地面站，其特征在于，所述第一拍摄设备为广角相机，所述第二拍摄设备为近红外相机。The ground station according to any one of claims 57-64, wherein the first photographing device is a wide-angle camera and the second photographing device is a near-infrared camera.
  66. 根据权利要求47所述的地面站,其特征在于,所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的红外影像,所述第一拍摄设备和所述第二拍摄设备同步拍摄。The ground station according to claim 47, wherein the communication interface is configured to: acquire a visible light image obtained by the first photographing device mounted on the aircraft, and acquire an infrared image obtained by the second photographing device mounted on the aircraft, The first photographing device and the second photographing device are simultaneously photographed.
  67. 根据权利要求66所述的地面站,其特征在于,所述处理器用于:The ground station of claim 66, wherein said processor is configured to:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述红外影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the infrared image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  68. 根据权利要求66所述的地面站,其特征在于,所述处理器用于:The ground station of claim 66, wherein said processor is configured to:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述红外影像的投影进行拼接,获得红外输出影像。And projecting the projection of the infrared image based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain an infrared output image.
  69. 根据权利要求68所述的地面站,其特征在于,显示组件用于:显示所述红外输出影像和/或所述可见光输出影像。A ground station according to claim 68, wherein the display component is operative to: display the infrared output image and/or the visible light output image.
  70. 根据权利要求68所述的地面站,其特征在于,所述处理器用于:The ground station of claim 68 wherein said processor is operative to:
    在所述第二拍摄设备拍摄获得的红外影像或者所述红外输出影像中识别出热源物体的位置。A position of the heat source object is recognized in the infrared image obtained by the second photographing device or the infrared output image.
  71. 根据权利要求70所述的地面站,其特征在于,显示组件还用于:The ground station of claim 70 wherein the display component is further configured to:
    显示所述热源物体在所述红外影像或所述红外输出影像中的位置。A position of the heat source object in the infrared image or the infrared output image is displayed.
  72. 根据权利要求70所述的地面站,其特征在于,所述热源物体包括电力线。A ground station according to claim 70, wherein said heat source object comprises a power line.
  73. 根据权利要求72所述的地面站，其特征在于，所述处理器用于：基于所述第二拍摄设备在拍摄所述红外影像时的位置和姿态，以及预设电力线数学模型，对识别出的电力线进行建模，形成电力线图层；The ground station according to claim 72, wherein the processor is configured to: model the identified power line based on the position and posture of the second photographing device when capturing the infrared image and a preset power line mathematical model, to form a power line layer;
    显示组件用于:在所述可见光输出影像上叠加显示所述电力线图层。The display component is configured to: superimpose and display the power line layer on the visible light output image.
  74. 根据权利要求66所述的地面站,其特征在于,所述处理器用于:The ground station of claim 66, wherein said processor is configured to:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the infrared image on the surface of the terrain and a point cloud generated by the dense matching;
    基于所述代价函数对所述红外影像在所述地形表面上的投影进行拼接,获得红外输出影像。The projection of the infrared image on the surface of the terrain is spliced based on the cost function to obtain an infrared output image.
  75. 根据权利要求66-74中任一项所述的地面站,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为红外相机。A ground station according to any one of claims 66-74, wherein the first photographing device is a wide-angle camera and the second photographing device is an infrared camera.
  76. 根据权利要求47所述的地面站,其特征在于,所述处理器用于:The ground station according to claim 47, wherein said processor is configured to:
    基于预先设定的像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态，以及所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，转换为世界坐标系下的位置和姿态。Converting, based on GPS information of preset image control points, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image into positions and postures in the world coordinate system.
  77. 根据权利要求76所述的地面站,其特征在于,所述处理器用于:The ground station of claim 76 wherein said processor is operative to:
    基于预先设定的像控点的GPS信息,确定所述像控点在所述第一拍摄设备拍摄获得的第一影像中的相对位置;Determining a relative position of the image control point in the first image captured by the first photographing device based on GPS information of a preset image control point;
    基于所述像控点在所述第一影像中的相对位置，以及所述像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态转换为世界坐标系下的位置和姿态；Converting the position and posture of the first photographing device when capturing the first image into a position and a posture in the world coordinate system, based on the relative position of the image control point in the first image and the GPS information of the image control point;
    基于预先标定的第一拍摄设备和所述第二拍摄设备之间的相对位置关系，将所述第二拍摄设备在拍摄所述第二影像时的位置和姿态转换为世界坐标系下的位置和姿态。Converting the position and posture of the second photographing device when capturing the second image into a position and a posture in the world coordinate system, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  78. 根据权利要求77所述的地面站,其特征在于,显示组件用于: A ground station according to claim 77, wherein the display component is for:
    显示所述像控点在所述第一影像上的所述相对位置。Displaying the relative position of the image control point on the first image.
  79. 根据权利要求47所述的地面站,其特征在于,所述处理器用于:The ground station according to claim 47, wherein said processor is configured to:
    以预先设定的像控点作为约束条件,采用运动恢复结构SFM算法计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured by using a motion recovery structure SFM algorithm with a preset image control point as a constraint condition;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述第二影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the second image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  80. 根据权利要求47所述的地面站,其特征在于,所述处理器用于:The ground station according to claim 47, wherein said processor is configured to:
    基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when photographing the first image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于稠密匹配生成的点云拟合形成地形表面。A point cloud fit based on dense matching forms a terrain surface.
  81. 根据权利要求80所述的地面站,其特征在于,所述处理器用于:The ground station of claim 80 wherein said processor is operative to:
    从稠密匹配生成的点云中提取地面点;Extracting ground points from a point cloud generated by dense matching;
    基于提取出的地面点拟合形成地形表面。A terrain surface is formed based on the extracted ground point fit.
  82. 根据权利要求47所述的地面站，其特征在于，所述处理器还用于：The ground station according to claim 47, wherein the processor is further configured to:
    对所述第二影像在所述表面上的投影进行全局色彩和/或亮度调整。Global color and/or brightness adjustment is performed on the projection of the second image on the surface.
  83. 根据权利要求51或58或67所述的地面站，其特征在于，所述预设的图像处理算法包括如下任意一种：空中三角测量、从运动恢复结构SFM的算法、即时定位与地图构建SLAM算法。The ground station according to claim 51, 58 or 67, wherein the preset image processing algorithm comprises any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
  84. 根据权利要求47-83中任一项所述的地面站，其特征在于，所述输出影像包括正射影像。The ground station according to any one of claims 47-83, wherein the output image comprises an orthophoto.
  85. 根据权利要求47-83中任一项所述的地面站,其特征在于,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联。A ground station according to any one of claims 47-83, wherein the photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground.
  86. 根据权利要求85所述的地面站，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。The ground station according to claim 85, wherein when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device each photograph at the same photographing interval in the horizontal direction.
  87. 根据权利要求85所述的地面站，其特征在于，当所述飞行器相对于地表高度改变时，所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。The ground station according to claim 85, wherein the photographing interval of the first photographing device and the second photographing device changes when the height of the aircraft relative to the ground surface changes.
  88. 根据权利要求87所述的地面站，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The ground station according to claim 87, wherein when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device photograph in the horizontal direction at time-varying photographing intervals, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft with respect to the ground surface.
  89. 一种地面站,其特征在于,包括:通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;A ground station, comprising: a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值,其中,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Is equal to a preset threshold, wherein a photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground;
    所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
    所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
    所述处理器用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  90. 根据权利要求89所述的地面站，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。The ground station according to claim 89, wherein when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device photograph at the same photographing interval in the horizontal direction.
  91. 根据权利要求89所述的地面站,其特征在于,当所述飞行器相对于地表高度改变时,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔改变。The ground station according to claim 89, wherein a photographing interval of said first photographing device and said second photographing device changes when said aircraft changes in height with respect to a surface.
  92. 根据权利要求91所述的地面站，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The ground station according to claim 91, wherein when the aircraft flies at a uniform absolute altitude, the first photographing device and the second photographing device photograph in the horizontal direction at time-varying photographing intervals, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft with respect to the ground surface.
  93. 一种飞行器控制器,其特征在于,包括:通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;An aircraft controller, comprising: a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于:获取飞行器搭载的第一拍摄设备拍摄获得的第一影像,以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像,其中,所述第一拍摄设备的FOV大于或等于预设阈值;The communication interface is configured to: acquire a first image obtained by the first photographing device mounted on the aircraft, and acquire a second image obtained by the second photographing device mounted on the aircraft, where the FOV of the first photographing device is greater than or Equal to the preset threshold;
    所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
    所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
    所述处理器还用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is further configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  94. 根据权利要求93所述的飞行器控制器,其特征在于,所述第一拍摄设备的FOV大于所述第二拍摄设备的FOV。The aircraft controller of claim 93, wherein the FOV of the first photographing device is greater than the FOV of the second photographing device.
  95. 根据权利要求94所述的飞行器控制器，其特征在于，所述第二拍摄设备的FOV小于所述预设阈值。The aircraft controller according to claim 94, wherein the FOV of the second photographing device is less than the preset threshold.
  96. 根据权利要求95所述的飞行器控制器，其特征在于，所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的第一可见光影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的第二可见光影像，所述第一拍摄设备和所述第二拍摄设备同步拍摄。The aircraft controller according to claim 95, wherein the communication interface is configured to: acquire a first visible light image captured by the first photographing device mounted on the aircraft, and acquire a second visible light image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
  97. 根据权利要求96所述的飞行器控制器,其特征在于,所述处理器,用于:The aircraft controller according to claim 96, wherein said processor is configured to:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备的相对位置关系，计算所述第二拍摄设备在拍摄所述第二可见光影像时的位置和姿态。Calculating a position and a posture of the second photographing device when capturing the second visible light image, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  98. 根据权利要求96所述的飞行器控制器，其特征在于，所述处理器用于：基于对所述第一可见光影像在所述地形表面上的投影进行拼接时采用的拼接线，对所述第二可见光影像在所述地形表面上的投影进行拼接，获得所述第二拍摄设备对应的可见光输出影像。The aircraft controller according to claim 96, wherein the processor is configured to: stitch the projection of the second visible light image on the terrain surface based on the seam lines used when stitching the projection of the first visible light image on the terrain surface, to obtain a visible light output image corresponding to the second photographing device.
  99. 根据权利要求96所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 96 wherein said processor is operative to:
    基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when photographing the first visible light image to generate a corresponding dense point cloud or semi-dense point cloud;
    基于所述第二可见光影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the second visible light image on the terrain surface and a point cloud generated by the dense matching;
    基于所述代价函数对所述第二可见光影像在所述地形表面上的投影进行拼接,获得所述第二拍摄设备对应的可见光输出影像。The projection of the second visible light image on the terrain surface is spliced based on the cost function, and the visible light output image corresponding to the second imaging device is obtained.
  100. 根据权利要求96所述的飞行器控制器，其特征在于，所述处理器还用于：基于所述第一拍摄设备在拍摄所述第一可见光影像时的位置和姿态，将所述第一可见光影像投影到所述地形表面。The aircraft controller according to claim 96, wherein the processor is further configured to: project the first visible light image onto the terrain surface based on the position and posture of the first photographing device when capturing the first visible light image.
  101. 根据权利要求98或99所述的飞行器控制器,其特征在于,所述处理器还用于:对所述第二可见光影像在所述地形表面上的投影进行正射处理。The aircraft controller according to claim 98 or claim 99, wherein the processor is further configured to orthographically process the projection of the second visible light image on the terrain surface.
  102. 根据权利要求93-101中任一项所述的飞行器控制器,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为长焦相机。The aircraft controller according to any one of claims 93-101, wherein the first photographing device is a wide-angle camera and the second photographing device is a telephoto camera.
  103. 根据权利要求93所述的飞行器控制器，其特征在于，所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的近红外影像，所述第一拍摄设备和所述第二拍摄设备同步拍摄。The aircraft controller according to claim 93, wherein the communication interface is configured to: acquire a visible light image captured by the first photographing device mounted on the aircraft, and acquire a near-infrared image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
  104. 根据权利要求103所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 103 wherein said processor is operative to:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系，计算所述第二拍摄设备在拍摄所述近红外影像时的位置和姿态。Calculating a position and a posture of the second photographing device when capturing the near-infrared image, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  105. 根据权利要求103所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 103 wherein said processor is operative to:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。The projection of the near-infrared image on the surface of the terrain is spliced based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain a near-infrared output image.
  106. 根据权利要求105所述的飞行器控制器，其特征在于，所述处理器还用于：基于所述可见光输出影像和所述近红外输出影像，计算植被覆盖指数NDVI和/或增强型植被指数EVI，并基于计算获得的NDVI和/或EVI，绘制相应的指数图。The aircraft controller according to claim 105, wherein the processor is further configured to: calculate a vegetation index NDVI and/or an enhanced vegetation index EVI based on the visible light output image and the near-infrared output image, and draw a corresponding index map based on the calculated NDVI and/or EVI.
  107. 根据权利要求106所述的飞行器控制器,其特征在于,所述处理器还用于:The aircraft controller of claim 106, wherein the processor is further configured to:
    基于所述指数图,分析植被的生长状况,并输出分析结果。Based on the index map, the growth state of the vegetation is analyzed, and the analysis result is output.
  108. 根据权利要求103所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 103 wherein said processor is operative to:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述近红外影像在所述地形表面上的投影,以及所述稠密匹配生成的点云,构建代价函数;Constructing a cost function based on a projection of the near-infrared image on the surface of the terrain and a point cloud generated by the dense matching;
    基于所述代价函数对所述近红外影像在所述地形表面上的投影进行拼接,获得近红外输出影像。A projection of the near-infrared image on the surface of the terrain is spliced based on the cost function to obtain a near-infrared output image.
  109. 根据权利要求103-108中任一项所述的飞行器控制器,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为近红外相机。The aircraft controller according to any one of claims 103 to 108, wherein the first photographing device is a wide-angle camera and the second photographing device is a near-infrared camera.
  110. 根据权利要求93所述的飞行器控制器，其特征在于，所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的可见光影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的红外影像，所述第一拍摄设备和所述第二拍摄设备同步拍摄。The aircraft controller according to claim 93, wherein the communication interface is configured to: acquire a visible light image captured by the first photographing device mounted on the aircraft, and acquire an infrared image captured by the second photographing device mounted on the aircraft, the first photographing device and the second photographing device photographing synchronously.
  111. 根据权利要求110所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 110 wherein said processor is operative to:
    基于预设的图像处理算法,计算所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the visible light image is captured based on a preset image processing algorithm;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述红外影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the infrared image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  112. 根据权利要求110所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 110 wherein said processor is operative to:
    对所述可见光影像在所述地形表面上的投影进行拼接处理,获得可见光输出影像;Performing a splicing process on the projection of the visible light image on the surface of the terrain to obtain a visible light output image;
    基于对所述可见光影像在所述地形表面上的投影进行拼接时采用的拼接线,对所述红外影像的投影进行拼接,获得红外输出影像。And projecting the projection of the infrared image based on a splicing line used for splicing the projection of the visible light image on the surface of the terrain to obtain an infrared output image.
  113. 根据权利要求112所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 112 wherein said processor is operative to:
    在所述第二拍摄设备拍摄获得的红外影像或者所述红外输出影像中识别出热源物体的位置。A position of the heat source object is recognized in the infrared image obtained by the second photographing device or the infrared output image.
  114. 根据权利要求113所述的飞行器控制器,其特征在于,所述热源物体包括电力线。The aircraft controller of claim 113 wherein said heat source object comprises a power line.
  115. 根据权利要求114所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 114, wherein the processor is configured to:
    基于所述第二拍摄设备在拍摄所述红外影像时的位置和姿态，以及预设电力线数学模型，对识别出的电力线进行建模，形成电力线图层；Modeling the identified power line based on the position and posture of the second photographing device when capturing the infrared image and a preset power line mathematical model, to form a power line layer;
    将所述电力线图层叠加在所述可见光输出影像上。The power line layer is superimposed on the visible light output image.
  116. 根据权利要求110所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 110 wherein said processor is operative to:
    基于所述第一拍摄设备在拍摄所述可见光影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云;Performing dense matching based on the position and posture of the first photographing device when capturing the visible light image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于所述红外影像在所述地形表面上的投影,以及所述稠密匹配生 成的点云,构建代价函数;Projecting a projection on the surface of the terrain based on the infrared image, and the dense matching a point cloud, constructing a cost function;
    基于所述代价函数对所述红外影像在所述地形表面上的投影进行拼接,获得红外输出影像。The projection of the infrared image on the surface of the terrain is spliced based on the cost function to obtain an infrared output image.
  117. 根据权利要求110-116中任一项所述的飞行器控制器,其特征在于,所述第一拍摄设备为广角相机,所述第二拍摄设备为红外相机。The aircraft controller according to any one of claims 110 to 116, wherein the first photographing device is a wide-angle camera and the second photographing device is an infrared camera.
  118. 根据权利要求93所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 93 wherein said processor is operative to:
    基于预先设定的像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态，以及所述第二拍摄设备在拍摄所述第二影像时的位置和姿态，转换为世界坐标系下的位置和姿态。Converting, based on GPS information of preset image control points, the position and posture of the first photographing device when capturing the first image and the position and posture of the second photographing device when capturing the second image into positions and postures in the world coordinate system.
  119. 根据权利要求118所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 118 wherein said processor is operative to:
    基于预先设定的像控点的GPS信息,确定所述像控点在所述第一拍摄设备拍摄获得的第一影像中的相对位置;Determining a relative position of the image control point in the first image captured by the first photographing device based on GPS information of a preset image control point;
    基于所述像控点在所述第一影像中的相对位置，以及所述像控点的GPS信息，将所述第一拍摄设备在拍摄所述第一影像时的位置和姿态转换为世界坐标系下的位置和姿态；Converting the position and posture of the first photographing device when capturing the first image into a position and a posture in the world coordinate system, based on the relative position of the image control point in the first image and the GPS information of the image control point;
    基于预先标定的第一拍摄设备和所述第二拍摄设备之间的相对位置关系，将所述第二拍摄设备在拍摄所述第二影像时的位置和姿态转换为世界坐标系下的位置和姿态。Converting the position and posture of the second photographing device when capturing the second image into a position and a posture in the world coordinate system, based on the pre-calibrated relative positional relationship between the first photographing device and the second photographing device.
  120. 根据权利要求93所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 93 wherein said processor is operative to:
    以预先设定的像控点作为约束条件,采用运动恢复结构SFM算法计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态;Calculating a position and a posture of the first photographing device when the first image is captured by using a motion recovery structure SFM algorithm with a preset image control point as a constraint condition;
    基于预先标定的所述第一拍摄设备和所述第二拍摄设备之间的相对位置关系,计算所述第二拍摄设备在拍摄所述第二影像时的位置和姿态。And calculating a position and a posture of the second photographing device when the second image is captured based on a relative positional relationship between the first photographing device and the second photographing device that is pre-calibrated.
  121. 根据权利要求93所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 93 wherein said processor is operative to:
    基于所述第一拍摄设备在拍摄所述第一影像时的位置和姿态进行稠密匹配,生成相应的稠密点云或半稠密点云; Performing dense matching based on the position and posture of the first photographing device when photographing the first image, and generating a corresponding dense point cloud or semi-dense point cloud;
    基于稠密匹配生成的点云拟合形成地形表面。A point cloud fit based on dense matching forms a terrain surface.
  122. 根据权利要求121所述的飞行器控制器,其特征在于,所述处理器用于:The aircraft controller of claim 121 wherein said processor is operative to:
    从稠密匹配生成的点云中提取地面点;Extracting ground points from a point cloud generated by dense matching;
    基于提取出的地面点拟合形成地形表面。A terrain surface is formed based on the extracted ground point fit.
  123. 根据权利要求93所述的飞行器控制器，其特征在于，所述处理器还用于：The aircraft controller according to claim 93, wherein the processor is further configured to:
    对所述第二影像在所述表面上的投影进行全局色彩和/或亮度调整。Global color and/or brightness adjustment is performed on the projection of the second image on the surface.
  124. 根据权利要求97或104或111所述的飞行器控制器，其特征在于，所述预设的图像处理算法包括如下任意一种：空中三角测量、从运动恢复结构SFM的算法、即时定位与地图构建SLAM算法。The aircraft controller according to claim 97, 104 or 111, wherein the preset image processing algorithm comprises any one of the following: aerial triangulation, a structure-from-motion (SFM) algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
  125. 根据权利要求93-124中任一项所述的飞行器控制器，其特征在于，所述输出影像包括正射影像。The aircraft controller according to any one of claims 93-124, wherein the output image comprises an orthophoto.
  126. 根据权利要求93-124中任一项所述的飞行器控制器,其特征在于,所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联。The aircraft controller according to any one of claims 93-124, wherein the photographing interval of the first photographing device and the second photographing device is associated with a flying height of the aircraft with respect to the ground.
  127. 根据权利要求126所述的飞行器控制器,其特征在于,当所述飞行器相对于地表以固定的相对高度飞行时,所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。The aircraft controller according to claim 126, wherein said first photographing device and said second photographing device are respectively identical in a horizontal direction when said aircraft is flying at a fixed relative height with respect to the earth's surface The shooting interval is taken.
  127. 根据权利要求126所述的飞行器控制器，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述第一拍摄设备和所述第二拍摄设备在水平方向上分别以相同的拍摄间隔进行拍摄。The aircraft controller according to claim 126, wherein when the aircraft flies at a fixed height relative to the ground surface, the first photographing device and the second photographing device each photograph at the same photographing interval in the horizontal direction.
  129. 根据权利要求128所述的飞行器控制器,其特征在于,当所述飞行器以统一的绝对高度飞行时,所述第一拍摄设备和所述第二拍摄设备在水平方向上以时变的拍摄间隔进行拍摄,其中,所述拍摄间隔与预先配置的影像重叠率,以及所述飞行器与地表的相对高度关联。The aircraft controller according to claim 128, wherein said first photographing device and said second photographing device are time-variant photographing intervals in a horizontal direction when said aircraft is flying at a uniform absolute height Shooting is performed wherein the shooting interval is associated with a pre-configured image overlay rate and the relative height of the aircraft to the surface.
  130. 一种飞行器控制器,其特征在于,包括:通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;An aircraft controller, comprising: a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于：获取飞行器搭载的第一拍摄设备拍摄获得的第一影像，以及获取飞行器搭载的第二拍摄设备拍摄获得的第二影像，其中，所述第一拍摄设备的FOV大于或等于预设阈值，其中，所述第一拍摄设备和所述第二拍摄设备的拍摄间隔与所述飞行器相对于地面的飞行高度关联；The communication interface is configured to: acquire a first image captured by the first photographing device mounted on the aircraft, and acquire a second image captured by the second photographing device mounted on the aircraft, wherein the FOV of the first photographing device is greater than or equal to a preset threshold, and wherein the photographing interval of the first photographing device and the second photographing device is associated with the flying height of the aircraft relative to the ground;
    所述处理器用于:基于预设算法,计算所述第一拍摄设备在拍摄所述第一影像时的位置和姿态和所述第二拍摄设备在拍摄所述第二影像时的位置和姿态;The processor is configured to calculate, according to a preset algorithm, a position and a posture of the first photographing device when the first image is captured, and a position and a posture of the second photographing device when the second image is photographed;
    所述处理器用于:基于所述第一影像和所述第一拍摄设备在拍摄所述第一影像时的位置和姿态,生成地形表面;The processor is configured to generate a terrain surface based on a position and a posture of the first image and the first photographing device when the first image is captured;
    所述处理器用于:基于所述第二拍摄设备在拍摄所述第二影像时的位置和姿态,在所述地形表面上对所述第二影像进行投影和拼接处理,获得输出影像。The processor is configured to: perform projection and splicing processing on the second image on the terrain surface to obtain an output image based on a position and a posture of the second photographing device when the second image is captured.
  131. The aircraft controller according to claim 130, wherein, when the aircraft flies at a fixed relative height with respect to the ground surface, the first photographing device and the second photographing device photograph at the same photographing interval in the horizontal direction.
  132. The aircraft controller according to claim 130, wherein, when the height of the aircraft relative to the ground surface changes, the photographing interval of the first photographing device and the second photographing device changes.
  133. The aircraft controller according to claim 132, wherein, when the aircraft flies at a uniform absolute height, the first photographing device and the second photographing device photograph at a time-varying photographing interval in the horizontal direction, the photographing interval being associated with a pre-configured image overlap rate and with the relative height between the aircraft and the ground surface.
  134. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the output image generation method according to any one of claims 1-46.
  135. An unmanned aerial vehicle, characterized by comprising:
    a body;
    a power system, mounted on the body, for providing flight power;
    a first photographing device and a second photographing device, mounted on the body, for capturing images, wherein the FOV of the first photographing device is greater than or equal to a preset threshold;
    and the aircraft controller according to any one of claims 93-133.
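Claims 126-129 and 131-133 tie the photographing interval to a pre-configured image overlap rate and to the relative height between the aircraft and the ground surface. A small numerical sketch can make that dependency concrete; it rests on assumptions that are not part of the claims (a nadir-looking pinhole camera, the footprint approximation 2·h·tan(FOV/2), and the function names and values chosen below).

```python
import math

def ground_footprint(relative_height_m: float, fov_deg: float) -> float:
    """Approximate along-track ground coverage of a single frame taken by a
    nadir-looking pinhole camera at the given relative height (assumption)."""
    return 2.0 * relative_height_m * math.tan(math.radians(fov_deg) / 2.0)

def photographing_interval(relative_height_m: float, fov_deg: float,
                           overlap_rate: float) -> float:
    """Horizontal distance between consecutive exposures that preserves the
    pre-configured image overlap rate at the given relative height."""
    return (1.0 - overlap_rate) * ground_footprint(relative_height_m, fov_deg)

# Fixed relative height (claims 127/131): the interval stays constant.
print(round(photographing_interval(100.0, 84.0, 0.8), 1))      # ~36.0 m between shots

# Uniform absolute height over undulating terrain (claims 129/133): the relative
# height varies along the route, so the photographing interval is time-varying.
for h in (120.0, 90.0, 60.0):
    print(h, "m ->", round(photographing_interval(h, 84.0, 0.8), 1), "m")
```

Under these assumptions, a drop in relative height shrinks the ground footprint, so exposures must be triggered closer together to keep the same overlap, which is the behaviour described in claims 128, 129, 132 and 133.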
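Claim 130 assigns three steps to the processor: computing the position and posture of each photographing device, generating a terrain surface from the wide-FOV first image, and projecting and stitching the second image onto that surface. The sketch below illustrates only the projection-and-stitching step under strong simplifying assumptions: the terrain surface is reduced to a single horizontal plane, the camera is an ideal pinhole with known intrinsics K and camera-to-world pose (R, t), and resampling is a naive per-pixel splat. The function names and values are illustrative, not the preset algorithm of the claims.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, terrain_z=0.0):
    """Cast the ray of pixel (u, v) from a pinhole camera with intrinsics K and
    camera-to-world pose (R, t) onto the horizontal terrain plane z = terrain_z."""
    ray_world = R @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    s = (terrain_z - t[2]) / ray_world[2]        # scale along the ray to reach the plane
    return t + s * ray_world                     # ground point (x, y, terrain_z)

def splat_image(ortho, origin_xy, gsd, image, K, R, t, terrain_z=0.0):
    """Project every pixel of one second image onto the terrain plane and write it
    into the output grid; overlapping pixels from later images simply overwrite."""
    rows, cols = image.shape[:2]
    for v in range(rows):
        for u in range(cols):
            gx, gy, _ = pixel_to_ground(u, v, K, R, t, terrain_z)
            c = int(round((gx - origin_xy[0]) / gsd))
            r = int(round((gy - origin_xy[1]) / gsd))
            if 0 <= r < ortho.shape[0] and 0 <= c < ortho.shape[1]:
                ortho[r, c] = image[v, u]
    return ortho

# Toy usage: one 4x4 grey frame shot straight down from 50 m over flat ground.
K = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])                   # optical axis pointing at the ground
t = np.array([0.0, 0.0, 50.0])                   # camera position (x, y, height)
ortho = np.zeros((200, 200), dtype=np.uint8)     # 1 m ground sampling distance
splat_image(ortho, origin_xy=(-100.0, -100.0), gsd=1.0,
            image=np.full((4, 4), 128, dtype=np.uint8), K=K, R=R, t=t)
```

A full implementation would replace the single plane with the terrain surface generated from the first image and the pose of the first photographing device, and would blend rather than overwrite overlapping pixels, but the geometry of "project using position and posture, then stitch on the surface" is the same.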
PCT/CN2017/112202 2017-11-21 2017-11-21 Output image generation method, device and unmanned aerial vehicle WO2019100219A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/112202 WO2019100219A1 (en) 2017-11-21 2017-11-21 Output image generation method, device and unmanned aerial vehicle
CN201780026914.7A CN109076173A (en) 2017-11-21 2017-11-21 Image output generation method, equipment and unmanned plane

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112202 WO2019100219A1 (en) 2017-11-21 2017-11-21 Output image generation method, device and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2019100219A1 (en) 2019-05-31

Family

ID=64822093

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112202 WO2019100219A1 (en) 2017-11-21 2017-11-21 Output image generation method, device and unmanned aerial vehicle

Country Status (2)

Country Link
CN (1) CN109076173A (en)
WO (1) WO2019100219A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200371535A1 (en) * 2018-02-14 2020-11-26 SZ DJI Technology Co., Ltd. Automatic image capturing method and device, unmanned aerial vehicle and storage medium
US20230013031A1 (en) * 2020-03-20 2023-01-19 Huawei Technologies Co., Ltd. Display method and display control apparatus

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020150951A1 (en) * 2019-01-24 2020-07-30 深圳市大疆创新科技有限公司 Control terminal display method, device, and storage medium
CN111602176A (en) * 2019-06-03 2020-08-28 深圳市大疆创新科技有限公司 Method, system and storage medium for encoding and decoding position coordinates of point cloud data
CN110675450B (en) * 2019-09-06 2020-09-29 武汉九州位讯科技有限公司 Method and system for generating orthoimage in real time based on SLAM technology
WO2021046861A1 (en) * 2019-09-12 2021-03-18 深圳市大疆创新科技有限公司 Orthographic image generation method and system, and storage medium
CN112419176A (en) * 2020-11-10 2021-02-26 国网江西省电力有限公司电力科学研究院 Positive image point cloud enhancement method and device for single-loop power transmission channel conductor
CN112734630B (en) * 2020-12-30 2022-09-13 广州极飞科技股份有限公司 Ortho image processing method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120169842A1 (en) * 2010-12-16 2012-07-05 Chuang Daniel B Imaging systems and methods for immersive surveillance
CN103017739A (en) * 2012-11-20 2013-04-03 武汉大学 Manufacturing method of true digital ortho map (TDOM) based on light detection and ranging (LiDAR) point cloud and aerial image
CN104050649A (en) * 2014-06-13 2014-09-17 北京农业信息技术研究中心 Agricultural remote sensing system
CN105959576A (en) * 2016-07-13 2016-09-21 北京博瑞爱飞科技发展有限公司 Method and apparatus for shooting panorama by unmanned aerial vehicle
CN106204443A (en) * 2016-07-01 2016-12-07 成都通甲优博科技有限责任公司 A kind of panorama UAS based on the multiplexing of many mesh
CN107316325A (en) * 2017-06-07 2017-11-03 华南理工大学 A kind of airborne laser point cloud based on image registration and Image registration fusion method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542286B2 (en) * 2009-11-24 2013-09-24 Microsoft Corporation Large format digital camera with multiple optical systems and detector arrays
CN102736128A (en) * 2011-09-21 2012-10-17 中国科学院地理科学与资源研究所 Method and device for processing unmanned plane optical remote sensing image data
US9046759B1 (en) * 2014-06-20 2015-06-02 nearmap australia pty ltd. Compact multi-resolution aerial camera system
US9052571B1 (en) * 2014-06-20 2015-06-09 nearmap australia pty ltd. Wide-area aerial camera systems
CN104268935A (en) * 2014-09-18 2015-01-07 华南理工大学 Feature-based airborne laser point cloud and image data fusion system and method
US10752378B2 (en) * 2014-12-18 2020-08-25 The Boeing Company Mobile apparatus for pest detection and engagement
US9824290B2 (en) * 2015-02-10 2017-11-21 nearmap australia pty ltd. Corridor capture

Also Published As

Publication number Publication date
CN109076173A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
WO2019100219A1 (en) Output image generation method, device and unmanned aerial vehicle
US20240029200A1 (en) Method and system for image generation
US11897606B2 (en) System and methods for improved aerial mapping with aerial vehicles
US11070725B2 (en) Image processing method, and unmanned aerial vehicle and system
JP6496323B2 (en) System and method for detecting and tracking movable objects
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
CN107492069B (en) Image fusion method based on multi-lens sensor
JP7251474B2 (en) Information processing device, information processing method, information processing program, image processing device, and image processing system
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
CN112461210B (en) Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof
EP3358480B1 (en) Drawing creation device and drawing creation method
WO2023280038A1 (en) Method for constructing three-dimensional real-scene model, and related apparatus
CN108475442A (en) Augmented reality method, processor and unmanned plane for unmanned plane
CN110675448A (en) Ground light remote sensing monitoring method, system and storage medium based on civil aircraft
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
CN114812558A (en) Monocular vision unmanned aerial vehicle autonomous positioning method combined with laser ranging
CN108195359B (en) Method and system for acquiring spatial data
Abdullah et al. Camera calibration performance on different non-metric cameras.
Reich et al. Filling the Holes: potential of UAV-based photogrammetric façade modelling
WO2019100214A1 (en) Method, device, and unmanned aerial vehicle for generating output image
WO2021115192A1 (en) Image processing device, image processing method, program and recording medium
Fernández-Hernandez et al. A new trend for reverse engineering: Robotized aerial system for spatial information management
WO2021035746A1 (en) Image processing method and device, and movable platform
Zheng et al. A new flying range sensor: Aerial scan in omni-directions
WO2023047799A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17932590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 17932590

Country of ref document: EP

Kind code of ref document: A1