WO2019100214A1 - Method, device, and unmanned aerial vehicle for generating output image - Google Patents

Method, device, and unmanned aerial vehicle for generating output image

Info

Publication number
WO2019100214A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
posture
aircraft
processor
point cloud
Prior art date
Application number
PCT/CN2017/112189
Other languages
French (fr)
Chinese (zh)
Inventor
马岳文
张明磊
马东东
赵开勇
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to CN201780029525.XA (CN110073403A)
Priority to PCT/CN2017/112189 (WO2019100214A1)
Publication of WO2019100214A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the present application relates to the field of UAV application technologies, and in particular, to an output image generation method, device, and drone.
  • a Digital Orthophoto Map (DOM) is produced from digitized aerial photographs or remote-sensing images (monochrome or color) whose projection differences have been corrected pixel by pixel using a digital elevation model; the corrected images are then mosaicked and stitched according to the extent of the map sheet. Because such an image uses the real terrain surface as its mosaic projection surface, it carries true geographic coordinate information, and real distances can be measured on it.
  • the method for generating digital orthophotos in the prior art mainly uses a Global Positioning System (GPS) receiver and an inertial measurement unit (IMU) mounted with the shooting device to record the position and posture of the shooting device when each image is captured, projects the images onto an estimated average elevation surface according to that position and posture, and obtains the digital orthophoto after stitching.
  • the embodiments of the invention provide an output image generation method, a device, and a drone that obtain an output image with a better stitching effect while reducing equipment cost.
  • a first aspect of the embodiments of the present invention provides a method for generating an output image, including: acquiring an image captured by a photographing device mounted on an aircraft; calculating, based on a preset image processing algorithm, the position and posture of the photographing device when the image was captured; and, based on the position and the posture, performing projection processing and image stitching processing on the image to obtain an output image.
  • a second aspect of the embodiments of the present invention provides a ground station, including:
  • a communication interface and one or more processors, the one or more processors working separately or in cooperation, the communication interface being connected to the processor;
  • the communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
  • the processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
  • the processor is further configured to: based on the position and the posture, perform projection processing and image stitching processing on the image to obtain an output image.
  • a third aspect of the embodiments of the present invention provides a controller, including:
  • a communication interface and one or more processors, the one or more processors working separately or in cooperation, the communication interface being connected to the processor;
  • the communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
  • the processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
  • the processor is further configured to: based on the position and the posture, perform projection processing and image stitching processing on the image to obtain an output image.
  • a fourth aspect of the embodiments of the present invention provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the output image generation method of the first aspect described above.
  • a fifth aspect of the embodiments of the present invention provides a drone, including: a fuselage; a power system mounted on the fuselage for providing flight power; a photographing device mounted on the fuselage for capturing images; and the controller of the third aspect described above.
  • the output image generation method, device, and drone provided by the embodiments of the present invention acquire an image captured by a photographing device mounted on an aircraft, calculate the position and posture of the photographing device when the image was captured based on a preset image processing algorithm, and then, based on that position and posture, perform projection processing and image stitching processing on the image to obtain an output image. Because the position and posture of the photographing device at the time of shooting are obtained by the preset image processing algorithm, a high-precision GPS and IMU do not need to be mounted on the aircraft to obtain an accurate position and posture, so the equipment cost can be reduced while an output image with better stitching is obtained.
  • FIG. 1 is a flowchart of an output image generation method provided by the present invention.
  • FIG. 2 is a schematic diagram of a connection between a ground station and an aircraft according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an image projection method according to an embodiment of the present invention.
  • FIG. 4a and FIG. 4b are schematic output images of the same scene provided by the present invention.
  • FIG. 5a and FIG. 5b are schematic output images of the same scene according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • when a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component or an intervening component may be present at the same time.
  • embodiments of the present invention provide an output image generation method, which may be performed by a ground station or by a controller mounted on a drone.
  • the following embodiments take the ground station as an example; the controller executes the method in a similar manner, which is not repeated in this embodiment.
  • FIG. 1 is a flowchart of a method for generating an output image according to the present invention. As shown in FIG. 1 , the method in this embodiment includes:
  • Step 101 Acquire an image captured by a shooting device mounted on the aircraft.
  • the ground station in this embodiment is a device having a computing function and/or processing capability, and the device may specifically be a remote controller, a smart phone, a tablet computer, a laptop computer, a watch, a wristband, and the like, and combinations thereof.
  • the aircraft in this embodiment may specifically be a drone equipped with a photographing device, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like.
  • the ground station 21 and the aircraft 22 can be connected through an Application Programming Interface (API) 23, but are not limited to being connected through an API.
  • the ground station 21 and the aircraft 22 can be connected by wire or wirelessly, for example through at least one of the following: Wireless Fidelity (Wi-Fi), Bluetooth, software defined radio (SDR), or other custom protocols.
  • the aircraft can perform automatic cruising and photographing according to a predetermined route, and can also perform cruising and photographing under the control of the ground station.
  • the shooting device shoots at a preset time interval or distance interval, and images captured at adjacent shooting moments overlap. The size of the overlapping portion can be set as needed, for example by setting a corresponding shooting interval or distance interval, although setting the shooting interval or distance interval is not the only way to determine the size of the image overlap.
  • the photographing device of the aircraft in this embodiment can shoot in the following possible ways:
  • in one possible way, when the aircraft flies at a fixed height relative to the surface, the shooting device shoots at a constant interval along the horizontal direction.
  • in another possible way, when the aircraft's height relative to the surface changes (for example, when it flies at a uniform absolute altitude over varying terrain), the shooting device shoots at a time-varying interval along the horizontal direction; specifically, the shooting interval can be determined from the pre-configured image overlap rate and the relative height of the aircraft above the surface, as in the sketch below.
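The patent text does not give the spacing formula; the following is a minimal flat-terrain sketch of how a distance (or time) interval could be derived from a configured forward overlap rate and the relative height above the surface. The function name, parameter names, and the example numbers are illustrative assumptions, not values from the source.

```python
import math

def shot_spacing(relative_height_m, along_track_fov_deg, forward_overlap, ground_speed_mps=None):
    """Distance (and optionally time) between exposures for a nadir-pointing camera.

    relative_height_m  : height of the aircraft above the surface, in metres
    along_track_fov_deg: camera field of view in the flight direction, in degrees
    forward_overlap    : required overlap between consecutive images, e.g. 0.8
    ground_speed_mps   : if given, also return the time interval between shots
    """
    # Ground footprint of one image along the flight direction (flat-terrain assumption).
    footprint = 2.0 * relative_height_m * math.tan(math.radians(along_track_fov_deg) / 2.0)
    # Moving this far between exposures leaves exactly `forward_overlap` of the footprint shared.
    distance_interval = footprint * (1.0 - forward_overlap)
    if ground_speed_mps is None:
        return distance_interval
    return distance_interval, distance_interval / ground_speed_mps

# 100 m above ground, 60 degree along-track FOV, 80 % forward overlap, 10 m/s ground speed
print(shot_spacing(100.0, 60.0, 0.8, 10.0))   # roughly (23.1 m, 2.31 s)
```

When the relative height varies along the route, re-evaluating this spacing per exposure naturally yields the time-varying interval described above.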
  • the ground station can obtain the images captured by the shooting device in the following possible ways:
  • the aircraft transmits the image captured by the photographing device to the ground station in real time through the API between it and the ground station.
  • the aircraft transmits the image captured by the photographing device within the preset time interval to the ground station at preset time intervals.
  • the aircraft transmits the images captured by the photographing device during the entire cruise to the ground station.
  • specifically, based on the above ways, the aircraft may send the images captured by the photographing device to the ground station in the form of code stream data or in the form of thumbnails. Depending on the computing power of the aircraft and the ground station, the resolution of the returned code stream data or thumbnails is not specifically limited and may be that of the original image.
  • taking the thumbnail form as an example, when the images are sent to the ground station as thumbnails, the ground station can display the received thumbnails so that the user can clearly see the images captured in real time.
  • Step 102 Calculate, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured.
  • the preset image processing algorithm in this embodiment may specifically be a structure-from-motion algorithm, an aerial triangulation algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
  • the SLAM algorithm is taken as an example to calculate the position and posture of the photographing device when the image is captured.
  • the method for calculating the position and posture of the photographing device when the image is taken by the SLAM algorithm is similar to the prior art, and will not be described herein.
  • it should be noted that the SLAM algorithm calculates the position and posture of the photographing device based on the matching of image feature points; therefore, the position and posture obtained by the SLAM calculation are a relative position and a relative posture within the shooting scene, as the two-view sketch below illustrates.
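The patent does not describe the SLAM internals. Purely as an illustration of why feature-matching-based pose estimation yields only relative poses (the translation is recovered up to an unknown scale), here is a hedged two-view sketch using OpenCV; the function and parameter names are assumptions, and a real SLAM pipeline would track many frames, maintain a map, and optimize poses jointly.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative rotation R and unit-scale translation t of the camera
    between two overlapping aerial images, given the intrinsic matrix K."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:1000]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decompose it into R and t. Because t is only
    # defined up to scale, the recovered trajectory lives in a relative, not world, frame.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```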
  • optionally, in order that the calculated position and posture correspond to the world coordinate system, so that the position and posture associated with each image have more practical reference value, in this embodiment the aircraft sends the GPS information of the image's shooting position to the ground station at the same time as it sends the image.
  • the ground station then converts the calculated position and posture into a position and posture in world coordinates based on the GPS information corresponding to the images; in another embodiment, world coordinates may be acquired by recognizing known markers, and the calculated position and posture are converted into a position and posture in world coordinates. A sketch of one possible alignment is given below.
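The patent only states that the relative SLAM poses are converted to world coordinates using per-image GPS information (or recognized markers), without giving the math. One common way to do this, shown here as an assumption rather than the patent's method, is a least-squares similarity alignment (Umeyama-style) between the SLAM camera positions and the GPS-derived positions.

```python
import numpy as np

def align_to_world(slam_positions, gps_positions):
    """Similarity transform (scale s, rotation R, translation t) mapping camera positions
    estimated by SLAM onto GPS-derived world coordinates.

    slam_positions, gps_positions: (N, 3) arrays of corresponding points."""
    X = np.asarray(slam_positions, dtype=float)
    Y = np.asarray(gps_positions, dtype=float)
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y

    # Cross-covariance and its SVD give the least-squares rotation.
    S = Yc.T @ Xc / len(X)
    U, D, Vt = np.linalg.svd(S)
    sign = np.sign(np.linalg.det(U @ Vt))
    C = np.diag([1.0, 1.0, sign])
    R = U @ C @ Vt

    var_x = (Xc ** 2).sum() / len(X)
    s = np.trace(np.diag(D) @ C) / var_x       # isotropic scale
    t = mu_y - s * R @ mu_x
    return s, R, t                             # world_point = s * R @ slam_point + t
```

Applying the same s, R, t to every SLAM pose places the trajectory, and hence the projected images, in world coordinates.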
  • Step 103 Perform projection processing and image splicing processing on the image based on the position and the posture to obtain an output image.
  • the output image involved in this embodiment may be specifically an orthophoto, such as an orthophoto map or other image with real geographic coordinate information obtained according to orthographic projection.
  • the image projection method based on the position and posture of the photographing device (which may be the relative position and the relative posture calculated by the SLAM algorithm, or the position and posture in the world coordinate system) in the embodiment includes the following:
  • in one possible implementation, an average elevation surface is estimated and the image is projected onto that surface according to the position and posture of the photographing device.
  • the way of obtaining the estimated average elevation surface is similar to the prior art and is not described here; a sketch of projecting a pixel onto such a plane is given below.
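As an illustration of projection onto an estimated average elevation surface, the sketch below intersects a pixel's viewing ray with a horizontal plane. The coordinate conventions (camera-to-world rotation, world z as elevation) and the names are assumptions made for the example only.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_pos_w, plane_z):
    """Intersect the viewing ray of pixel (u, v) with the horizontal plane z = plane_z.

    K         : 3x3 camera intrinsic matrix
    R_wc      : 3x3 rotation from camera coordinates to world coordinates
    cam_pos_w : camera position in world coordinates
    plane_z   : elevation of the estimated average elevation surface
    """
    # Ray direction in camera coordinates, then rotated into the world frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_w = R_wc @ ray_cam
    # Solve cam_pos_w.z + lam * ray_w.z = plane_z for the ray parameter lam.
    lam = (plane_z - cam_pos_w[2]) / ray_w[2]
    return cam_pos_w + lam * ray_w      # 3D ground point hit by the pixel
```

Doing this for every pixel (or, equivalently, warping the image with the induced homography) places the image on the average elevation surface.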
  • in another possible implementation, a terrain surface is fitted from a point cloud, and the image is projected onto the terrain surface according to the position and posture of the photographing device. Specifically, FIG. 3 is a flowchart of an image projection method according to an embodiment of the present invention; in this implementation, the image projection method includes:
  • Step 301 Calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on a SLAM algorithm.
  • Step 302 Fit a terrain surface based on the calculated point cloud.
  • Step 303 Project the image onto the terrain surface based on the position and posture of the image.
  • the method for calculating the dense point cloud, semi-dense point cloud, or sparse point cloud of the images in the embodiment of FIG. 3 may be any method in the prior art, which is not specifically limited in this embodiment; the basic two-view triangulation behind such point clouds is sketched below.
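Step 301 relies on multi-view geometry to turn matched image points into 3D points. The patent leaves the method open, so the following is only a minimal two-view triangulation sketch using OpenCV; the pose convention (world-to-camera) and the function name are assumptions.

```python
import cv2
import numpy as np

def triangulate_sparse_points(K, R1, t1, R2, t2, pts1, pts2):
    """Triangulate matched pixel coordinates from two posed views into 3D points,
    the basic operation behind the sparse or semi-dense point clouds of step 301.

    R1, t1 / R2, t2 : world-to-camera rotation and translation of each view
    pts1, pts2      : (N, 2) arrays of matched pixel coordinates
    """
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])    # 3x4 projection matrices
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    pts_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64), pts2.T.astype(np.float64))
    return (pts_h[:3] / pts_h[3]).T               # (N, 3) Euclidean points
```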
  • in an actual scene, after the position and posture of the photographing device are obtained from the images, the images may first be projected onto the fitted terrain surface based on the method shown in FIG. 3, and the images on the terrain surface may then be stitched to obtain a relatively rough output image.
  • optionally, when performing the image projection processing, the point cloud obtained by the above calculation may first be divided into ground points and non-ground points, the terrain surface may be fitted from the ground points in the point cloud, and the images may then be projected onto the terrain surface according to the position and posture of the photographing device when the images were captured; a simple ground-filtering and surface-fitting sketch follows.
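The patent does not specify how ground points are separated from non-ground points or how the terrain surface is fitted. As one simple stand-in, the sketch below keeps points near the lowest elevation of each horizontal grid cell as ground and interpolates them into a regular grid; the names, cell sizes, and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def split_and_fit_terrain(points, cell=5.0, height_tol=0.5, grid_res=2.0):
    """Crude ground filter plus terrain-surface fit.

    points: (N, 3) point cloud in world coordinates (x, y, z with z as elevation).
    A point counts as 'ground' if it lies within height_tol of the lowest point in its
    horizontal grid cell; ground points are then interpolated into a regular grid that
    stands in for the fitted terrain surface / digital elevation model."""
    pts = np.asarray(points, dtype=float)
    cells = np.floor(pts[:, :2] / cell).astype(int)
    ground_mask = np.zeros(len(pts), dtype=bool)
    for key in {tuple(c) for c in cells}:
        idx = np.where((cells == key).all(axis=1))[0]
        zmin = pts[idx, 2].min()
        ground_mask[idx[pts[idx, 2] - zmin < height_tol]] = True

    ground = pts[ground_mask]
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max(), grid_res)
    ys = np.arange(pts[:, 1].min(), pts[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(ground[:, :2], ground[:, 2], (gx, gy), method='linear')
    return ground_mask, (gx, gy, gz)    # mask of ground points and the fitted surface
```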
  • further, if a more precise and clear output image is desired, after step 301 the point cloud obtained by the above calculation and/or the calculated position and posture may be optimized to obtain a point cloud that meets a preset quality condition and/or a position and posture that meet a preset accuracy condition.
  • the optimized point cloud is then divided into ground points and non-ground points, a digital elevation model is generated by fitting the ground points in the optimized point cloud, and the digital elevation model is used as the terrain surface for projection.
  • of course, the timing of dividing the point cloud into ground points and non-ground points in this embodiment is not uniquely limited; the point cloud may also be divided first and the point cloud and/or the position and posture optimized afterwards, which is not specifically limited in this embodiment.
  • optionally, the image stitching method in this embodiment may be one of the following: direct overlay, panoramic image stitching, selecting for each region of the final image the image whose center is closest to that region, or a cost-function-based stitching method.
  • in this embodiment, fitting the terrain surface from the point cloud is taken as the example for determining the projection surface, and the projections on that surface are stitched with the cost-function-based method: a cost function is constructed with the distance from each projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched based on this cost function so that the color difference on both sides of the stitching line is minimized. A simple per-cell version of such a cost is sketched below.
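The exact cost function is not given in the source beyond "distance from the projected pixel to the photographing device as a constraint" and "minimize the color difference on both sides of the stitching line". The sketch below is therefore a greedy per-cell label choice, not a true seam optimization: it combines a color-disagreement term with a distance term, and the weighting and array layout are assumptions.

```python
import numpy as np

def choose_source_per_cell(projections, cam_distances, alpha=0.05):
    """Pick, for every terrain-grid cell, the projected image with the lowest cost.

    projections   : (M, H, W, 3) images projected onto the terrain grid, NaN where absent
    cam_distances : (M, H, W) distance from each projected pixel to its photographing device
    alpha         : weight of the distance constraint relative to the color term
    returns       : (H, W) index of the chosen source image per cell
    """
    consensus = np.nanmean(projections, axis=0)                        # per-cell mean color
    color_cost = np.nansum(np.abs(projections - consensus), axis=-1)   # disagreement with consensus
    cost = color_cost + alpha * cam_distances                          # distance-to-camera constraint
    missing = np.isnan(projections).any(axis=-1)                       # cells an image does not cover
    cost = np.where(missing, np.inf, cost)
    return np.argmin(cost, axis=0)
```

A production implementation would instead search for a stitching line (for example with graph cuts or dynamic programming) so that the cost is minimized along the seam rather than independently per cell.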
  • the ground station can process the received image by using the following two working modes:
  • in one working mode, the ground station processes images as they are received; that is, while the aircraft is cruising, the ground station processes each received image to obtain a semi-dense, dense, or sparse point cloud of the images, and updates that point cloud for every image received.
  • this "process-on-receipt" mode should not be read too literally: it depends on the processing speed of the ground station. If the processing speed of the ground station can keep up with reception, the ground station processes each image immediately after receiving it.
  • in the other working mode, the ground station processes the received images in sequence. Specifically, the ground station may process the images in the order in which they are received, in the order in which they are stored, or in another custom processing order, which is not specifically limited in this embodiment.
  • optionally, global color adjustment and/or brightness adjustment may first be performed on the projections, which noticeably improves image quality; a simple gain-compensation sketch is given after this passage. Then, based on the adjusted projections, the cost function is constructed with the distance from each projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched based on this cost function so that the color difference on both sides of the stitching line is minimized, yielding an output image with better overall consistency.
  • further, the non-ground points in the point cloud may also be excluded, so that the stitching line automatically avoids non-ground areas, resulting in an output image with a better visual effect.
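The "preset strategy" for color and brightness adjustment is not described in the source. As a stand-in, the sketch below applies a simple per-image, per-channel gain so that each projection matches the consensus mean over the terrain grid; the array layout and the choice of gain compensation are assumptions.

```python
import numpy as np

def gain_compensate(projections):
    """Per-image, per-channel gain so each projection matches the consensus mean color.

    projections: (M, H, W, 3) images projected onto the terrain grid, NaN where absent."""
    consensus = np.nanmean(projections, axis=0)               # per-cell mean over all images
    adjusted = np.array(projections, dtype=float, copy=True)
    for i in range(projections.shape[0]):
        img = projections[i]
        # Cells where both this image and the consensus are defined.
        valid = ~np.isnan(img).any(axis=-1) & ~np.isnan(consensus).any(axis=-1)
        for c in range(3):
            gain = consensus[valid, c].mean() / max(img[valid, c].mean(), 1e-6)
            adjusted[i, ..., c] = img[..., c] * gain
    return adjusted
```

More elaborate strategies (joint least-squares gains, vignetting correction, or local blending) would serve the same purpose of evening out color and brightness before the seams are chosen.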
  • FIG. 4a and FIG. 4b are schematic output images of the same scene provided by the present invention: FIG. 4a is the output image obtained by using an estimated elevation surface as the projection surface, and FIG. 4b is the output image obtained by using the terrain surface fitted from the point cloud as the projection surface and stitching with the cost-function method.
  • it can be seen that the output image of FIG. 4a shows severe stitching misalignment.
  • when the terrain surface fitted from the point cloud is used as the projection surface, the terrain can be fitted more accurately, and the cost function minimizes the color difference on both sides of the stitching line, so the resulting output image shows no obvious misalignment and looks better overall. Therefore, in the embodiment of the present invention, fitting the terrain surface from the point cloud and stitching with the cost function can solve the problem of stitching misalignment in the output image.
  • optionally, before the stitching processing, the color and brightness of the projections on the terrain surface may be adjusted based on a preset strategy, which enables a better stitching effect in the subsequent stitching processing.
  • FIG. 5a and FIG. 5b are schematic output images of the same scene according to an embodiment of the present invention.
  • the projections on the terrain surface in FIG. 5a were not processed for color and brightness, so the overall output image in FIG. 5a is poorly balanced in color and brightness and has a poor visual effect; in FIG. 5b, the brightness and color of the projections on the terrain surface were processed before stitching, so the resulting output image is better balanced in color and brightness and has a better visual effect. Therefore, performing color and brightness processing on the projections on the terrain surface before the stitching processing can effectively improve the visual effect of the output image.
  • the step of displaying an output image may be further included, wherein the output image may be an orthophoto.
  • because orthophotos are measurable, they can provide a large amount of geographic information, which is especially valuable in the aftermath of natural disasters such as earthquakes, as well as in agriculture, surveying and mapping, and transportation planning.
  • the output image generation method provided by this embodiment acquires an image captured by a photographing device mounted on an aircraft, calculates the position and posture of the photographing device when the image was captured based on a preset image processing algorithm, and then, based on that position and posture, performs projection processing and image stitching processing on the image to obtain an output image. Because the position and posture of the photographing device at the time of shooting are obtained by the preset image processing algorithm, a high-precision GPS and IMU do not need to be mounted on the aircraft to obtain an accurate position and posture, so the equipment cost can be reduced while an output image with better stitching is obtained.
  • in addition, because this embodiment can generate orthophotos in real time as an integrated solution from image acquisition to orthophoto generation, it greatly improves productivity compared with existing non-real-time orthophoto generation solutions, which typically require the images to be imported from the aircraft to a computer and processed with dedicated software, usually taking several hours of processing time.
  • in comparison, the orthophoto generation scheme provided in this embodiment can deliver a higher-precision orthophoto of the survey area as soon as the aircraft's data acquisition operation ends.
  • the embodiment of the present invention provides an output image generating method.
  • the photographing device may be specifically a camera.
  • in this embodiment, an operator sets a cruise area and a cruise route for the aircraft through the ground station, the aircraft collects images along the flight route within the cruise area, and the aircraft sends the collected images to the ground station as thumbnails or as a code stream.
  • after receiving the images, the ground station initializes the SLAM algorithm and generates an initial semi-dense point cloud of the shooting scene; further, through the SLAM algorithm, the position and posture of the camera at the time each image was captured are calculated, and a semi-dense point cloud of the images is generated by dense matching.
  • the ground station fits the terrain surface based on the semi-dense point cloud and projects the received images onto the fitted terrain surface according to the position and posture at the time of shooting. A cost function is then constructed based on the principle of minimizing the color difference on both sides of the stitching line, and the cost function is used to find the optimal stitching line along which the images on the terrain surface are stitched.
  • further, the ground station may determine, according to the cruise time of the aircraft or the number of images returned by the aircraft, whether the aircraft has finished image acquisition. If so, the camera positions and postures corresponding to the acquired images, obtained by the SLAM algorithm, and the semi-dense point cloud are optimized to obtain positions and postures that meet the preset accuracy requirement and a point cloud that meets the preset quality requirement. The ground station then classifies the optimized point cloud into ground points and non-ground points, re-fits the terrain surface based on the divided ground points, and re-projects the images onto the re-fitted terrain surface.
  • optionally, the ground station can also perform global color adjustment on the projections on the terrain surface to ensure color consistency, and can further construct a cost function so that the selected stitching line automatically bypasses non-ground points (such as buildings and other objects); the resulting output image therefore does not suffer from misalignment and has a good visual effect.
  • FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • the ground station 10 includes a communication interface 11 and one or more processors 12, the one or more processors working independently or in cooperation, the communication interface 11 being connected to the processor 12. The communication interface 11 is configured to acquire images captured by a photographing device mounted on the aircraft; the processor 12 is configured to calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the images were captured; and the processor 12 is further configured to perform projection processing and image stitching processing on the images based on the position and the posture to obtain an output image.
  • the communication interface 11 is configured to: acquire code stream data of an image captured by a photographing device mounted on an aircraft.
  • the communication interface 11 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  • optionally, the ground station further includes a display component 13, which is communicatively coupled to the processor 12; the display component 13 is configured to display the acquired thumbnails.
  • the communication interface 11 is further configured to: acquire GPS information of the photographing device when the image is captured; the processor 12 is further configured to: based on the GPS information corresponding to the image, The position is converted to a position in the world coordinate system, and the posture is converted into a posture in the world coordinate system.
  • the processor 12 is configured to: calculate the position and posture of the photographing device when the images were captured based on a simultaneous localization and mapping (SLAM) algorithm.
  • the processor 12 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
  • the processor 12 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; and the processor 12 is configured to: form a terrain based on the optimized point cloud fitting surface.
  • the processor 12 is configured to: extract a ground point from the optimized point cloud; and fit the terrain surface based on the ground point.
  • the processor 12 is configured to: perform optimization processing on the position and the posture, and obtain a position and a posture that meet a preset accuracy condition.
  • the processor 12 is configured to: perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
  • the output image includes an orthophoto.
  • the display component 13 is configured to display the orthophoto.
  • the processor 12 is configured to: control the shooting device of the aircraft to shoot at the same shooting interval in the horizontal direction.
  • the processor 12 is configured to: control the shooting device of the aircraft to change the shooting interval for shooting.
  • the processor 12 is configured to: control the shooting device of the aircraft to shoot in the horizontal direction at a time-varying shooting interval, wherein the shooting interval is determined based on the pre-configured image overlap rate and the relative height of the aircraft above the surface.
  • the ground station provided in this embodiment can execute the technical solution of the embodiment of FIG. 1; its manner of execution and beneficial effects are similar and are not described here again.
  • the embodiment of the present invention further provides a ground station.
  • the ground station is based on the embodiment of FIG. 7.
  • the processor 12 is configured to: calculate a semi-dense or dense point cloud of the images based on the position and the posture, or calculate a sparse point cloud of the images based on the SLAM algorithm; fit a terrain surface based on the calculated point cloud; and project the images onto the terrain surface based on the position and posture of the images.
  • the ground station provided by this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • the controller 20 includes a communication interface 21 and one or more processors 22, the one or more processors working independently or in cooperation, the communication interface 21 being connected to the processor 22. The communication interface 21 is configured to acquire images captured by a photographing device mounted on the aircraft; the processor 22 is configured to calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the images were captured; and the processor 22 is further configured to perform projection processing and image stitching processing on the images based on the position and the posture to obtain an output image.
  • the communication interface 21 is configured to: acquire code stream data of an image captured by a photographing device mounted on the aircraft.
  • the communication interface 21 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  • the communication interface 21 is further configured to: acquire GPS information when the imaging device captures the image; and the processor 22 is further configured to: based on the GPS information corresponding to the image, The position is converted to a position in the world coordinate system, and the posture is converted into a posture in the world coordinate system.
  • the processor 22 is configured to: calculate the position and posture of the photographing device when the images were captured based on a simultaneous localization and mapping (SLAM) algorithm.
  • the processor 22 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
  • the processor 22 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; and the processor 22 is configured to fit the terrain surface based on the optimized point cloud.
  • the processor 22 is configured to: extract a ground point from the optimized point cloud; and fit the terrain surface based on the ground point.
  • the processor 22 is configured to: perform optimization processing on the position and the posture, and obtain a position and a posture that meet a preset accuracy condition.
  • the processor 22 is configured to perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
  • the output image includes an orthophoto.
  • the processor 22 is configured to: control the shooting device of the aircraft to shoot at the same shooting interval in the horizontal direction.
  • the processor 22 is configured to: control the shooting device of the aircraft to change the shooting interval for shooting.
  • the processor 22 is configured to: control the shooting device of the aircraft to shoot in the horizontal direction at a time-varying shooting interval, wherein the shooting interval is determined based on the pre-configured image overlap rate and the relative height of the aircraft above the surface.
  • the controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • the embodiment of the present invention further provides a controller. Based on the embodiment of FIG. 8, the processor 22 is configured to: calculate a semi-dense or dense point cloud of the images based on the position and the posture, or calculate a sparse point cloud of the images based on the SLAM algorithm; fit a terrain surface based on the calculated point cloud; and project the images onto the terrain surface based on the position and posture of the images.
  • the controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • embodiments of the present invention provide a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the output image generation method provided by the above embodiments.
  • Embodiments of the present invention provide a drone.
  • the drone includes a fuselage; a power system mounted on the fuselage for providing flight power; a photographing device mounted on the fuselage for capturing images; and the controller described in the above embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • for example, the division into units is only a division by logical function; in actual implementation there may be other ways of dividing, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present invention provide a method, device, and unmanned aerial vehicle for generating an output image. The method comprises: obtaining an image by an image capture device installed on an aircraft; on the basis of a preset image processing algorithm, calculating a position and attitude of the image capture device when the image was captured; and on the basis of the position and attitude, performing projection processing and image stitching processing on the image to obtain an output image. The method, device, and unmanned aerial vehicle provided by the embodiments of the invention can reduce equipment costs while producing a better stitched output image.

Description

Output image generation method, device and drone

Technical Field

The present application relates to the field of UAV application technologies, and in particular, to an output image generation method, a device, and a drone.

Background

A Digital Orthophoto Map (DOM) is produced from digitized aerial photographs or remote-sensing images (monochrome or color) whose projection differences are corrected pixel by pixel using a digital elevation model; the corrected images are then mosaicked and stitched according to the extent of the map sheet. Because such an image uses the real terrain surface as its mosaic projection surface, it carries true geographic coordinate information, and real distances can be measured on it.

The method for generating digital orthophotos in the prior art mainly uses a Global Positioning System (GPS) receiver and an inertial measurement unit (IMU) mounted with the shooting device to record the position and posture of the shooting device when each image is captured, projects the images onto an estimated average elevation surface according to that position and posture, and obtains the digital orthophoto after stitching.

However, high-precision GPS receivers and IMUs are expensive, so using them raises the cost; if lower-precision GPS receivers and IMUs are used instead, the equipment cost drops, but the stitching quality of images produced from low-precision positions and postures is poor.
Summary of the Invention

Embodiments of the present invention provide an output image generation method, a device, and a drone that obtain an output image with a better stitching effect while reducing equipment cost.

A first aspect of the embodiments of the present invention provides an output image generation method, including:

acquiring an image captured by a photographing device mounted on an aircraft;

calculating, based on a preset image processing algorithm, the position and posture of the photographing device when the image was captured;

based on the position and the posture, performing projection processing and image stitching processing on the image to obtain an output image.

A second aspect of the embodiments of the present invention provides a ground station, including:

a communication interface and one or more processors, the one or more processors working separately or in cooperation, the communication interface being connected to the processor;

the communication interface is configured to acquire an image captured by a photographing device mounted on an aircraft;

the processor is configured to calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image was captured;

the processor is further configured to perform, based on the position and the posture, projection processing and image stitching processing on the image to obtain an output image.

A third aspect of the embodiments of the present invention provides a controller, including:

a communication interface and one or more processors, the one or more processors working separately or in cooperation, the communication interface being connected to the processor;

the communication interface is configured to acquire an image captured by a photographing device mounted on an aircraft;

the processor is configured to calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image was captured;

the processor is further configured to perform, based on the position and the posture, projection processing and image stitching processing on the image to obtain an output image.

A fourth aspect of the embodiments of the present invention provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the output image generation method of the first aspect described above.

A fifth aspect of the embodiments of the present invention provides a drone, including:

a fuselage;

a power system mounted on the fuselage for providing flight power;

a photographing device mounted on the fuselage for capturing images;

and the controller of the third aspect described above.

The output image generation method, device, and drone provided by the embodiments of the present invention acquire an image captured by a photographing device mounted on an aircraft, calculate the position and posture of the photographing device when the image was captured based on a preset image processing algorithm, and then, based on that position and posture, perform projection processing and image stitching processing on the image to obtain an output image. Because the position and posture of the photographing device at the time of shooting are obtained by the preset image processing algorithm, a high-precision GPS and IMU do not need to be mounted on the aircraft to obtain an accurate position and posture, so the equipment cost can be reduced while an output image with better stitching is obtained.
Brief Description of the Drawings

FIG. 1 is a flowchart of an output image generation method provided by the present invention;

FIG. 2 is a schematic diagram of the connection between a ground station and an aircraft according to an embodiment of the present invention;

FIG. 3 is a flowchart of an image projection method according to an embodiment of the present invention;

FIG. 4a and FIG. 4b are schematic output images of the same scene provided by the present invention;

FIG. 5a and FIG. 5b are schematic output images of the same scene according to an embodiment of the present invention;

FIG. 6 is a flowchart of an output image generation method according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.

It should be noted that when a component is referred to as being "fixed to" another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "connected to" another component, it can be directly connected to the other component or an intervening component may be present at the same time.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terminology used in this description is only for describing specific embodiments and is not intended to limit the invention. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items.

Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. Provided there is no conflict, the features of the embodiments described below can be combined with each other.

Embodiments of the present invention provide an output image generation method, which may be performed by a ground station or by a controller mounted on a drone. The following embodiments take the ground station as an example; the controller executes the method in a similar manner, which is not repeated here. Referring to FIG. 1, FIG. 1 is a flowchart of an output image generation method provided by the present invention. As shown in FIG. 1, the method in this embodiment includes:

Step 101: acquire an image captured by a photographing device mounted on an aircraft.

The ground station in this embodiment is a device with computing and/or processing capability; it may specifically be a remote controller, a smartphone, a tablet computer, a laptop computer, a watch, a wristband, or the like, or a combination thereof.

The aircraft in this embodiment may specifically be a drone equipped with a photographing device, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like.

As shown in FIG. 2, the ground station 21 and the aircraft 22 can be connected through an Application Programming Interface (API) 23, but the connection is not limited to an API. Specifically, the ground station 21 and the aircraft 22 can be connected by wire or wirelessly, for example through at least one of the following: Wireless Fidelity (Wi-Fi), Bluetooth, software defined radio (SDR), or other custom protocols.

Optionally, in this embodiment the aircraft can cruise and shoot automatically along a predetermined route, or cruise and shoot under the control of the ground station.

In this embodiment, the shooting device shoots at a preset time interval or distance interval, and images captured at adjacent shooting moments overlap. The size of the overlapping portion can be set as needed, for example by setting a corresponding shooting interval or distance interval, although setting the shooting interval or distance interval is not the only way to determine the size of the image overlap.
For example, the photographing device of the aircraft in this embodiment can shoot in the following possible ways:

In one possible way, when the aircraft flies at a fixed height relative to the surface, the shooting device shoots at a constant interval along the horizontal direction.

In another possible way, when the aircraft's height relative to the surface changes, the shooting interval of the shooting device changes; for example, when the aircraft flies at a uniform absolute altitude, the shooting device shoots at a time-varying interval along the horizontal direction, where the shooting interval can be determined from the pre-configured image overlap rate and the relative height of the aircraft above the surface. Of course, this is merely an illustration and not the only limitation of the invention. Optionally, in this embodiment the ground station can obtain the images captured by the shooting device in the following possible ways:

In one possible way, the aircraft sends the images captured by the photographing device to the ground station in real time through the API between it and the ground station.

In another possible way, the aircraft sends the images captured by the photographing device within a preset time interval to the ground station at that preset time interval.

In yet another possible way, after the cruise ends, the aircraft sends all the images captured by the photographing device during the entire cruise to the ground station.

Specifically, based on the above ways, the aircraft may send the images captured by the photographing device to the ground station in the form of code stream data or in the form of thumbnails; depending on the computing power of the aircraft and the ground station, the resolution of the returned code stream data or thumbnails is not specifically limited and may be that of the original image. Taking the thumbnail form as an example, when the images are sent to the ground station as thumbnails, the ground station can display the received thumbnails so that the user can clearly see the images captured in real time.

Step 102: calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image was captured.

The preset image processing algorithm in this embodiment may specifically be a structure-from-motion algorithm, an aerial triangulation algorithm, or a simultaneous localization and mapping (SLAM) algorithm. This embodiment takes the SLAM algorithm as an example for calculating the position and posture of the photographing device when the image was captured. The method of calculating the position and posture of the photographing device with the SLAM algorithm is similar to the prior art and is not described here.

It should be noted that, in existing algorithms, the SLAM algorithm calculates the position and posture of the photographing device based on the matching of image feature points; therefore, the position and posture obtained by the SLAM calculation are a relative position and a relative posture within the shooting scene.

Optionally, in order that the calculated position and posture correspond to the world coordinate system, so that the position and posture associated with each image have more practical reference value, in this embodiment the aircraft sends the GPS information of the image's shooting position to the ground station at the same time as it sends the image. The ground station converts the calculated position and posture into a position and posture in world coordinates based on the GPS information corresponding to the image; in another embodiment, world coordinates may be acquired by recognizing known markers, and the calculated position and posture are converted into a position and posture in world coordinates.

Step 103: based on the position and the posture, perform projection processing and image stitching processing on the image to obtain an output image.

Optionally, the output image in this embodiment may specifically be an orthophoto, such as an orthophoto map or another image with true geographic coordinate information obtained by orthographic projection and stitching.
具体的,本实施例中基于拍摄设备的位置和姿态(可以是采用SLAM算法计算获得的相对位置和相对姿态,也可以是世界坐标系下的位置和姿态)的影像投影方法包括如下几种:Specifically, the image projection method based on the position and posture of the photographing device (which may be the relative position and the relative posture calculated by the SLAM algorithm, or the position and posture in the world coordinate system) in the embodiment includes the following:
在一种可能的实现方式中,通过预估平均高程面的方式,将影像按照拍摄设备的位置和姿态投影到平均高程面上。其中获得预估平均高程面的方式与现有技术类似,在这里不再赘述。In one possible implementation, the image is projected onto the average elevation surface according to the position and attitude of the photographing device by estimating the average elevation surface. The way to obtain the estimated average elevation surface is similar to the prior art and will not be described here.
在另一种可能的实现方式中,通过点云拟合的方式拟合地形表面,将影像按照拍摄设备的位置和姿态,投影到地形表面上。具体的,图3为本发明实施例提供的影像投射方法的流程图,如图3所示,在这种实现方式下,影像的投影方法包括:In another possible implementation, the terrain surface is fitted by a point cloud fitting method, and the image is projected onto the terrain surface according to the position and posture of the photographing device. Specifically, FIG. 3 is a flowchart of a method for image projection according to an embodiment of the present invention. As shown in FIG. 3 , in this implementation manner, a method for projecting an image includes:
步骤301、基于所述位置和所述姿态,计算获得所述影像的半稠密或稠密点云,或者基于SLAM算法,计算获得所述影像的稀疏点云。Step 301: Calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on a SLAM algorithm.
步骤302、基于计算获得的点云拟合地形表面。Step 302: Fit a terrain surface based on the calculated point cloud.
步骤303、基于所述影像的位置和姿态,将所述影像投射到所述地形表面。Step 303: Project the image onto the terrain surface based on the position and posture of the image.
其中,在图3实施例中计算影像稠密点云、半稠密点云或者稀疏点云方法,可以是现有技术中的任意一种方法,本实施例中不对其进行具体限定。The method for calculating the image dense point cloud, the semi-dense point cloud, or the sparse point cloud in the embodiment of FIG. 3 may be any method in the prior art, which is not specifically limited in this embodiment.
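To make steps 301-303 concrete, the following hedged sketch grids a (semi-)dense point cloud into a simple raster DEM by taking the median elevation in each cell, and shows how a world point on that surface is projected back into one image to fetch its colour. The cell size, the use of scipy, and the pinhole model are illustrative assumptions rather than requirements of this embodiment.

    import numpy as np
    from scipy.stats import binned_statistic_2d

    def fit_dem(points, cell=1.0):
        """Grid a (semi-)dense cloud into a raster DEM (median elevation per cell).

        points: (N, 3) world-frame cloud. Cells with no points come back as NaN.
        """
        x, y, z = points.T
        nx = max(int(np.ceil((x.max() - x.min()) / cell)), 1)
        ny = max(int(np.ceil((y.max() - y.min()) / cell)), 1)
        dem, x_edges, y_edges, _ = binned_statistic_2d(x, y, z, statistic="median", bins=[nx, ny])
        return dem, x_edges, y_edges

    def project_point_to_image(X, K, R_world_to_cam, C):
        """Pixel coordinates of the world point X under a pinhole camera (K, R, C)."""
        x_cam = R_world_to_cam @ (X - C)
        uvw = K @ x_cam
        return uvw[:2] / uvw[2]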
实际场景中，在基于影像计算获得拍摄设备的位置和姿态之后，可以基于图3所示的方法先将影像投影到拟合的地形表面上，再对地形表面上的影像进行拼接处理得到相对粗略的输出影像。In an actual scene, after the position and posture of the photographing device are obtained based on the image calculation, the image may first be projected onto the fitted terrain surface using the method shown in FIG. 3, and the images on the terrain surface are then stitched to obtain a relatively rough output image.
可选的，在进行影像投影处理时，可以先对上述计算获得的点云按照地面点和非地面点进行划分，并根据点云中的地面点拟合地形表面，进一步的，再依据拍摄设备拍摄影像时的位置和姿态将影像投影到地形表面上。Optionally, when performing the image projection processing, the point cloud obtained by the above calculation may first be divided into ground points and non-ground points, and the terrain surface is fitted from the ground points in the point cloud; further, the image is then projected onto the terrain surface according to the position and posture of the photographing device when the image was captured.
进一步的，若想要获得更精确清晰的输出影像，在步骤301之后还可以对上述计算获得的点云或/及上述计算获得的位置和姿态进行优化处理，得到符合预设质量条件的点云或/及符合预设精度条件的位置和姿态。再将优化处理后的点云划分为地面点和非地面点，从而利用经过优化处理的点云中的地面点拟合生成数字高程模型，并将该数字高程模型作为投影的地形表面进行投影。当然，本实施例中将点云划分为地面点和非地面点的时机并不是唯一限定的，实际上，也可以先对点云进行划分，再对点云或/及位置和姿态进行优化处理，本实施例不对其做具体的限定。Further, if a more accurate and clearer output image is desired, after step 301 the point cloud obtained above and/or the calculated position and posture may additionally be optimized, yielding a point cloud that meets a preset quality condition and/or a position and posture that meet a preset accuracy condition. The optimized point cloud is then divided into ground points and non-ground points, a digital elevation model is generated by fitting the ground points of the optimized point cloud, and this digital elevation model is used as the terrain surface for projection. Of course, the timing of dividing the point cloud into ground and non-ground points is not uniquely limited in this embodiment; the point cloud may also be divided first and the point cloud and/or the position and posture optimized afterwards, which is not specifically limited here.
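A hedged illustration of the ground / non-ground split follows: a simple grid-minimum filter that labels a point as ground when it lies close to the lowest point in its grid cell. Real pipelines may use more elaborate classifiers; the embodiment does not prescribe a particular one, so the cell size and height threshold below are assumptions of this sketch.

    import numpy as np

    def split_ground_points(points, cell=2.0, height_thresh=0.3):
        """Label a point as ground when it lies close to the lowest point of its grid cell."""
        cell_index = np.floor(points[:, :2] / cell).astype(np.int64)
        cells = {}
        for idx, key in enumerate(map(tuple, cell_index)):
            cells.setdefault(key, []).append(idx)

        is_ground = np.zeros(len(points), dtype=bool)
        for idxs in cells.values():
            idxs = np.asarray(idxs)
            z = points[idxs, 2]
            is_ground[idxs[z - z.min() < height_thresh]] = True
        return points[is_ground], points[~is_ground]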
可选的，本实施例中影像的拼接处理方法具体可以是如下方法中的一种：直接覆盖法、全景图像拼接方法、最终影像每个区域选择离影像中心最近的影像的方法，以及基于代价函数的拼接方法。本实施例中以采用点云拟合地形表面的方法为例，确定投影表面，并基于代价函数的拼接方法对投影表面上的投影进行拼接，即将投影的像素到拍摄设备的距离作为约束构建代价函数，基于代价函数对影像投射到地形表面上的投影进行拼接处理，使得拼接线两侧的色彩差异最小。Optionally, the image stitching method in this embodiment may be one of the following: a direct overlay method, a panoramic image stitching method, a method in which each region of the final image selects the image closest to the image centre, and a cost-function-based stitching method. In this embodiment, the method of fitting the terrain surface with the point cloud is taken as an example to determine the projection surface, and the projections on the projection surface are stitched with the cost-function-based method: a cost function is constructed using the distance from a projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched based on this cost function so that the colour difference on both sides of the seamline is minimized.
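The sketch below illustrates such a cost function under stated assumptions: the data term is the distance from each ground cell to the candidate camera, and the seam term sums colour differences where the chosen image changes between neighbouring cells (only one seam direction is shown for brevity). A real system would minimize the combined cost, for example with graph cuts or dynamic programming; here only the cost construction and a greedy initial labelling are shown, and the array layouts are assumptions of this illustration.

    import numpy as np

    def data_cost(cell_centers, camera_centers):
        """Distance from every ground cell to every camera: shape (num_cells, num_images)."""
        diff = cell_centers[:, None, :] - camera_centers[None, :, :]
        return np.linalg.norm(diff, axis=2)

    def greedy_labels(cell_centers, camera_centers, grid_shape):
        """Initial labelling: each cell takes the image whose camera centre is closest."""
        return data_cost(cell_centers, camera_centers).argmin(axis=1).reshape(grid_shape)

    def seam_colour_cost(projected, labels):
        """Colour difference summed where the label changes between horizontal neighbours.

        projected: (num_images, H, W, 3) image colours resampled on the DEM grid.
        labels:    (H, W) image index assigned to each cell.
        """
        left, right = labels[:, :-1], labels[:, 1:]
        seam = left != right
        rows, cols = np.nonzero(seam)
        colour_a = projected[left[seam], rows, cols].astype(float)
        colour_b = projected[right[seam], rows, cols + 1].astype(float)
        return float(np.abs(colour_a - colour_b).sum())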
可选的,本实施例中地面站可以通过如下两种工作方式处理接收到的影像:Optionally, in this embodiment, the ground station can process the received image by using the following two working modes:
在一种可能的处理方式中，地面站采用即收即处理的方式对接收到的影像进行处理。也就是说，地面站在飞行器巡航拍摄时，就对接收到的影像进行处理，获得影像的半稠密点云、稠密点云或者稀疏点云。在这种处理方式下，地面站每接收到一张影像都要对处理获得的半稠密点云、稠密点云或者稀疏点云进行更新。另外需要说明的是，上述涉及的即收即处理方式并不仅是指字面含义所包括的处理方式，而是取决于地面站的处理速度，若地面站的处理速度能够支持即接收即处理，那么地面站在接收到影像后就对影像进行即时处理，若地面站的处理速度不足以支持影像的即时处理，那么，地面站就对接收到的影像进行依次处理，具体的，地面站可以按照影像的接收顺序进行处理，也可以按照影像的存储顺序进行处理，还可以按照其他自定义的处理顺序进行处理，本实施例中不做具体限定。In one possible processing mode, the ground station processes the received images as they arrive. That is, while the aircraft is on its cruise shooting, the ground station already processes each received image to obtain a semi-dense, dense, or sparse point cloud of the images, and updates that point cloud every time a new image is received. It should also be noted that this process-as-received mode is not limited to its literal meaning but depends on the processing speed of the ground station: if the ground station is fast enough, it processes each image immediately upon receipt; if it is not fast enough for immediate processing, it processes the received images one after another, for example in the order of reception, in the order of storage, or in any other user-defined order, which is not specifically limited in this embodiment.
在另一种可能的处理方式中，地面站在飞行器巡航拍摄时，只接收拍摄设备拍摄的影像，在飞行器结束巡航拍摄时，再对接收到的影像进行集中处理。In another possible processing mode, the ground station only receives the images captured by the photographing device while the aircraft is on its cruise shooting, and processes the received images in a batch after the aircraft finishes the cruise shooting.
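A small hedged sketch of these two working modes is given below: in the first mode every image is handed to the per-image processing step as soon as it arrives, in the second mode images are queued and processed in order of reception after the flight. The process callable stands in for the per-image SLAM / densification step and is an assumption of this illustration.

    from collections import deque

    class GroundStationReceiver:
        """Process images as they arrive, or queue them for batch processing after the flight."""

        def __init__(self, process, realtime=True):
            self.process = process        # per-image processing step (SLAM update, densification, ...)
            self.realtime = realtime
            self.queue = deque()

        def on_image(self, image):
            if self.realtime:
                self.process(image)       # mode 1: immediate, incremental update
            else:
                self.queue.append(image)  # mode 2: defer until the flight ends

        def on_flight_finished(self):
            while self.queue:             # batch mode: process in the order of reception
                self.process(self.queue.popleft())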
可选的，在对影像的投影进行拼接处理时，可以首先对计算获得的点云进行全局的色彩调整或/及亮度调整，以达到明显改善影像质量的目的，进一步的，再基于调整后的投影影像，将投影的像素到拍摄设备的距离作为约束构建代价函数，基于代价函数对影像投射到地形表面上的投影进行拼接处理，使得拼接线两侧的色彩差异最小，这样就能够得到整体性较好的输出影像。Optionally, when stitching the projections of the images, a global colour and/or brightness adjustment may first be applied to the calculated point cloud to noticeably improve image quality; then, based on the adjusted projected images, a cost function is constructed with the distance from a projected pixel to the photographing device as a constraint, and the projections of the images on the terrain surface are stitched according to this cost function so that the colour difference on both sides of the seamline is minimized, yielding an output image with better overall consistency.
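One standard way to realize such a global brightness adjustment is gain compensation: a single multiplicative gain per image is solved for so that overlapping projections agree in mean intensity, with a prior pulling every gain towards 1. The least-squares formulation below is an illustrative assumption; the embodiment only calls for a global colour and/or brightness adjustment, not this particular formula.

    import numpy as np

    def solve_gains(mean_overlap_intensity, prior_weight=10.0):
        """One multiplicative gain per image from pairwise overlap statistics.

        mean_overlap_intensity[i][j]: mean intensity of image i inside its overlap
        with image j, or None when the two images do not overlap.
        """
        n = len(mean_overlap_intensity)
        A = prior_weight * np.eye(n)        # prior pulling every gain towards 1
        b = prior_weight * np.ones(n)
        for i in range(n):
            for j in range(n):
                I_ij = mean_overlap_intensity[i][j]
                I_ji = mean_overlap_intensity[j][i]
                if i == j or I_ij is None or I_ji is None:
                    continue
                # normal equations of the term (g_i * I_ij - g_j * I_ji)^2
                A[i, i] += I_ij * I_ij
                A[i, j] -= I_ij * I_ji
        return np.linalg.solve(A, b)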
进一步的，为了避免点云中非地面点对拼接造成影响（非地面点会导致拼接错位），本实施例在构建代价函数时，还可以考虑将点云中的非地面点排除在外，使得拼接线能够自动避开非地面区域，从而得到视觉效果较好的输出影像。Further, to avoid the influence of non-ground points in the point cloud on the stitching (non-ground points can cause stitching misalignment), when constructing the cost function this embodiment may also exclude the non-ground points in the point cloud, so that the seamline automatically avoids non-ground areas and an output image with a better visual effect is obtained.
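Continuing the seam-cost sketch above, excluding non-ground points can be approximated by giving cells covered by non-ground points an effectively infinite crossing penalty, so that the seam optimiser routes the seamline around buildings and other objects. The mask is assumed to come from the ground / non-ground split sketched earlier; the penalty value is arbitrary.

    import numpy as np

    def penalise_non_ground(seam_cost_map, non_ground_mask, penalty=1e9):
        """Give cells covered by non-ground points an effectively infinite crossing cost."""
        cost = seam_cost_map.astype(float).copy()
        cost[non_ground_mask] = penalty
        return cost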
具体的，图4a和图4b是本发明提供的两个相同场景的输出影像示意图，其中图4a是以预估高程面作为投影表面所得到的输出影像，图4b是以点云拟合成形的地形表面作为投影表面所得到的输出影像，二者均采用代价函数的方法进行拼接。如图4a所示，在图4a中由于采用了预估高程面作为投影表面，由于高程面不能够精确的拟合地形表面，因此，图4a的输出图像产生了严重的拼接错位。在图4b中由于采用了点云拟合形成的地形表面作为投影面，能够较精确的拟合出地形表面，且采用代价函数的方法能够使得拼接线两侧的色差最小化，因此得到的输出影像没有出现明显的拼接错位现象，且整个输出影像的整体性较好。因此本发明实施例通过点云拟合地形表面，并采用代价函数的方法进行拼接处理，能够解决输出影像拼接错位的问题。Specifically, FIG. 4a and FIG. 4b are schematic output images of the same scene provided by the present invention, where FIG. 4a is the output image obtained with the estimated elevation surface as the projection surface and FIG. 4b is the output image obtained with the terrain surface fitted from the point cloud as the projection surface; both are stitched with the cost-function method. As shown in FIG. 4a, because the estimated elevation surface is used as the projection surface and cannot accurately fit the real terrain, the output image of FIG. 4a exhibits severe stitching misalignment. In FIG. 4b, because the terrain surface formed by point cloud fitting is used as the projection surface, the terrain can be fitted more accurately, and the cost-function method minimizes the colour difference on both sides of the seamline, so the resulting output image shows no obvious stitching misalignment and has better overall consistency. Therefore, by fitting the terrain surface with the point cloud and stitching with the cost-function method, the embodiment of the present invention can solve the problem of stitching misalignment in the output image.
可选的,为了使整个输出影像具有更好的视觉效果,本实施例在将影像投射到地形表面上之后,还可以基于预设策略对地形表面上的投影进行色彩和亮度的调整。使得在后续拼接的过程中能够得到更好的拼接效果。Optionally, in order to make the entire output image have a better visual effect, after the image is projected onto the terrain surface, the color and brightness of the projection on the terrain surface may be adjusted based on a preset strategy. This enables a better stitching effect in the subsequent stitching process.
示例的，图5a和图5b为本发明实施例提供的两个相同场景下的输出影像示意图，在图5a中地形表面上的投影没有经过色彩和亮度的处理，因此，在图5a中整个输出影像在色彩和亮度方面的整体性不是很好，视觉效果较差，而在图5b中由于在进行拼接之前对地形表面上的投影进行了亮度和色彩的整体性处理，因此得到的输出图像在色彩和亮度方面的整体性较好，视觉效果较好。因此，本发明实施例通过在拼接处理之前对地形表面上的投影进行色彩和亮度的整体性处理，能够有效提高输出影像的视觉效果。For example, FIG. 5a and FIG. 5b are schematic output images of the same scene according to an embodiment of the present invention. In FIG. 5a the projections on the terrain surface have not undergone colour and brightness processing, so the overall consistency of the output image in colour and brightness is poor and the visual effect is inferior; in FIG. 5b the projections on the terrain surface were given a global brightness and colour treatment before stitching, so the resulting output image is more consistent in colour and brightness and has a better visual effect. Therefore, by applying a global colour and brightness treatment to the projections on the terrain surface before the stitching process, the embodiment of the present invention can effectively improve the visual effect of the output image.
可选的,在本实施例中还可以包括显示输出影像的步骤,其中该输出影像可以是正射影像。由于正射影像具有可测量性,因此能够提供大量的地理信息,尤其是在地震等自然灾害的场景下,以及农业,测绘和交通规划的场景下具有重要作用。Optionally, in this embodiment, the step of displaying an output image may be further included, wherein the output image may be an orthophoto. Because orthophotos are measurable, they can provide a large amount of geographic information, especially in the context of natural disasters such as earthquakes, as well as in agriculture, mapping, and transportation planning.
本实施例提供的输出影像生成方法，通过获取飞行器上搭载的拍摄设备拍摄获得的影像，并基于预设图像处理算法，计算获得拍摄设备在拍摄该影像时的位置和姿态，从而基于拍摄设备在拍摄该影像时的位置和姿态，对该影像进行投影处理和影像拼接处理，获得输出影像。由于在本发明实施例中拍摄设备在拍摄影像时的位置和姿态是通过预设图像处理算法来获得的，因而不需要在飞行器上搭载高精度的GPS和IMU即可获得较为精确的位置和姿态，从而能够在获得拼接较好的输出影像的同时，降低设备成本。In the output image generation method provided by this embodiment, the image captured by the photographing device mounted on the aircraft is obtained, the position and posture of the photographing device at the time of capture are calculated based on a preset image processing algorithm, and projection processing and image stitching are then performed on the image based on that position and posture to obtain an output image. Because the position and posture of the photographing device when capturing the image are obtained through the preset image processing algorithm, a relatively accurate position and posture can be obtained without mounting a high-precision GPS and IMU on the aircraft, so an output image with good stitching can be obtained while reducing equipment cost.
另外，由于本实施例可以产生实时的正射影像，相对于现有的非实时正射影像生成的解决方案，本实施例从影像获取到正射影像生成的一体化解决方案极大的提升了作业生产效率，非实时的正射影像生成解决方案通常需要从飞行器将图片导入到计算机并操作软件进行处理，并且通常需要数个小时的处理时间。本实施例提供的正射影像生成方案在飞行器数据采集作业结束后即可获取测区的一个较高精度的正射影像。In addition, since this embodiment can generate real-time orthophotos, the integrated solution from image acquisition to orthophoto generation greatly improves operational efficiency compared with existing non-real-time orthophoto generation solutions, which typically require importing the pictures from the aircraft to a computer and processing them with dedicated software, often taking several hours. With the orthophoto generation scheme provided by this embodiment, a fairly high-accuracy orthophoto of the survey area is available as soon as the aircraft's data acquisition run ends.
本发明实施例提供一种输出影像生成方法，本实施例中拍摄设备可以被具体为相机。如图6所示，在该方法中：操作人员通过地面站为飞行器设定巡航区域和巡航路线，飞行器基于该巡航路线在巡航区域中飞行作业采集影像，飞行器采集获得的影像以拍照缩略图或码流的形式发送给地面站。地面站在接收到该影像后，初始化SLAM算法并生成拍摄场景的初始半稠密点，进一步的，再通过SLAM算法，计算获得相机在拍摄该影像时的位置和姿态，并采用密集匹配的方法生成影像的半稠密点。在更新拍摄场景的半稠密点之后，地面站基于该些半稠密点拟合地形表面，并将接收到的影像按照拍摄时的位置和姿态投影到拟合的地形表面上。再基于拼接线两侧色彩差异最小的原则构建代价函数，通过代价函数寻找最优拼接线对影像在地形表面上的投影进行拼接。进一步的，地面站还可以根据飞行器巡航的时间或者飞行器传回的影像数量确定飞行器是否已经完成了影像采集，若是完成了，则基于SLAM算法对获取到的影像所对应的相机位置、姿态，以及半稠密点云进行优化，获得符合预设精度要求的位置和姿态，以及符合预设质量要求的点云。进一步的，地面站对优化后的点云进行分类，将点云划分为地面点和非地面点，并基于划分得到的地面点重新拟合地形表面，将影像重新投影到重新拟合的地形表面上。在此之后，为了获得更好的视觉效果，地面站还可以对地形表面上的投影进行全局的色彩调整，以保证色彩的一致性，进一步再通过构建代价函数使得在选取拼接线时能够自动绕过非地面点（比如建筑等地物），这样最终得到的输出影像就不会出现错位的问题，具有较好的视觉效果。An embodiment of the present invention provides an output image generation method; in this embodiment the photographing device may specifically be a camera. As shown in FIG. 6, in this method an operator sets a cruise area and a cruise route for the aircraft through the ground station, the aircraft flies the route to collect images over the cruise area, and the collected images are sent to the ground station in the form of photo thumbnails or code streams. After receiving an image, the ground station initializes the SLAM algorithm and generates initial semi-dense points of the shooting scene; it then uses the SLAM algorithm to calculate the position and posture of the camera when the image was captured and generates semi-dense points of the image by dense matching. After the semi-dense points of the scene are updated, the ground station fits a terrain surface based on these semi-dense points and projects the received image onto the fitted terrain surface according to the position and posture at the time of shooting. A cost function is then constructed on the principle of minimizing the colour difference on both sides of the seamline, and the optimal seamline found through the cost function is used to stitch the projections of the images on the terrain surface. Further, the ground station may determine, from the cruise time or from the number of images returned by the aircraft, whether the aircraft has finished image acquisition; if so, the camera positions, postures, and semi-dense point cloud corresponding to the acquired images are optimized based on the SLAM algorithm to obtain positions and postures that meet the preset accuracy requirement and a point cloud that meets the preset quality requirement. Further, the ground station classifies the optimized point cloud into ground and non-ground points, re-fits the terrain surface based on the ground points, and re-projects the images onto the re-fitted terrain surface. After that, to obtain a better visual effect, the ground station may also apply a global colour adjustment to the projections on the terrain surface to ensure colour consistency, and then construct a cost function so that the seamline automatically bypasses non-ground points (such as buildings and other objects); the final output image therefore does not show misalignment and has a good visual effect.
本实施例提供的方法,其具体执行方式和有益效果与图1实施例类似,在这里不再赘述。The specific implementation manner and beneficial effects of the method provided in this embodiment are similar to those in the embodiment of FIG. 1, and are not described herein again.
本发明实施例提供一种地面站,该地面站可以是上述实施例所述的地面站。图7为本发明实施例提供的地面站的结构示意图,如图7所示,地面站10包括:通信接口11、一个或多个处理器12;一个或多个处理器单独或协同工作,通信接口11和处理器12连接;通信接口11用于:获取飞行器上搭载的拍摄设备拍摄获得的影像;处理器12用于:基于预设图像处理算法,计算获得所述拍摄设备在拍摄所述影像时的位置和姿态;处理器12还用于:基于所述位置和所述姿态,对所述影像进行投影处理和影像拼接处理,获得输出影像。The embodiment of the invention provides a ground station, which may be the ground station described in the above embodiment. FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention. As shown in FIG. 7, the ground station 10 includes: a communication interface 11, one or more processors 12; and one or more processors work independently or in cooperation. The interface 11 is connected to the processor 12; the communication interface 11 is configured to: acquire an image captured by a photographing device mounted on the aircraft; and the processor 12 is configured to: obtain, according to a preset image processing algorithm, the photographing device to capture the image The position and posture of the time; the processor 12 is further configured to: perform projection processing and image stitching processing on the image to obtain an output image based on the position and the posture.
可选的,所述通信接口11用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的码流数据。Optionally, the communication interface 11 is configured to: acquire code stream data of an image captured by a photographing device mounted on an aircraft.
可选的,所述通信接口11用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的缩略图。Optionally, the communication interface 11 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
可选的，所述地面站还包括显示组件13，所述显示组件13与所述处理器12通信连接；所述显示组件13用于：显示获取到的所述缩略图。Optionally, the ground station further includes a display component 13, and the display component 13 is communicatively connected to the processor 12; the display component 13 is configured to: display the acquired thumbnail.
可选的,所述处理器12用于:基于即时定位与地图构建SLAM算法,计算所述拍摄设备在拍摄所述影像时的位置和姿态。Optionally, the processor 12 is configured to: calculate a position and a posture of the photographing device when the image is captured, based on an instant positioning and map construction SLAM algorithm.
可选的,所述处理器12用于:构建代价函数,基于所述代价函数对所述影像投射到所述地形表面上的投影进行拼接处理。Optionally, the processor 12 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
可选的，所述处理器12还用于：对所述点云进行优化处理，获得符合预设质量条件的点云；所述处理器12用于：基于优化后的点云拟合形成地形表面。Optionally, the processor 12 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; the processor 12 is configured to: fit and form a terrain surface based on the optimized point cloud.
可选的,所述处理器12用于:从优化后的点云中提取地面点;基于所述地面点拟合地形表面。Optionally, the processor 12 is configured to: extract a ground point from the optimized point cloud; and fit the terrain surface based on the ground point.
可选的,所述处理器12用于:对所述位置、所述姿态进行优化处理,获得符合预设精度条件的位置和姿态。Optionally, the processor 12 is configured to: perform optimization processing on the position and the posture, and obtain a position and a posture that meet a preset accuracy condition.
可选的,所述处理器12用于:基于预设策略对所述影像在所述地形表面上的投影进行色彩和/或亮度的调整。Optionally, the processor 12 is configured to: perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
可选的,所述输出影像包括正射影像。Optionally, the output image includes an orthophoto.
可选的,显示组件13用于显示所述正射影像。Optionally, the display component 13 is configured to display the orthophoto.
可选的,当所述飞行器相对于地表以固定的相对高度飞行时,所述处理器12用于:控制所述飞行器的拍摄设备在水平方向上以相同的拍摄间隔进行拍摄Optionally, when the aircraft is flying at a fixed relative height with respect to the ground surface, the processor 12 is configured to: control the shooting device of the aircraft to shoot at the same shooting interval in the horizontal direction.
可选的,当所述飞行器相对于地表高度改变时,所述处理器12用于:控制所述飞行器的拍摄设备改变拍摄间隔进行拍摄。Optionally, when the aircraft is changed in height relative to the surface, the processor 12 is configured to: control the shooting device of the aircraft to change the shooting interval for shooting.
可选的，当所述飞行器以统一的绝对高度飞行时，所述处理器12用于：控制所述飞行器的拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。Optionally, when the aircraft flies at a uniform absolute height, the processor 12 is configured to: control the photographing device of the aircraft to shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
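As an illustration only (this calculation is not stated in the text), a shooting interval can be tied to a configured forward overlap and the height above ground with a simple nadir-looking pinhole model; the interval naturally shrinks as the aircraft flies lower, which matches the behaviour described above. All parameter names below are assumptions of this sketch.

    def shot_spacing(height_agl_m, sensor_height_mm, focal_length_mm, overlap):
        """Ground distance between exposures that yields the requested forward overlap."""
        footprint = height_agl_m * sensor_height_mm / focal_length_mm   # along-track ground coverage
        return footprint * (1.0 - overlap)

    def shot_interval(height_agl_m, sensor_height_mm, focal_length_mm, overlap, ground_speed_mps):
        """Time between exposures; it shrinks as the aircraft flies lower over the terrain."""
        spacing = shot_spacing(height_agl_m, sensor_height_mm, focal_length_mm, overlap)
        return spacing / ground_speed_mps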
本实施例提供的地面站能够执行图1实施例的技术方案，其执行方式和有益效果类似，在这里不再赘述。The ground station provided in this embodiment can execute the technical solution of the embodiment of FIG. 1; its implementation manner and beneficial effects are similar and are not repeated here.
本发明实施例还提供一种地面站，该地面站在图7实施例的基础上，处理器12用于：基于所述位置和所述姿态，计算获得所述影像的半稠密或稠密点云，或者基于SLAM算法，计算获得所述影像的稀疏点云；基于计算获得的点云拟合地形表面；基于所述影像的位置和姿态，将所述影像投射到所述地形表面。An embodiment of the present invention further provides a ground station. On the basis of the embodiment of FIG. 7, the processor 12 is configured to: calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on the SLAM algorithm; fit a terrain surface based on the calculated point cloud; and project the image onto the terrain surface based on the position and posture of the image.
本实施例提供的地面站能够执行图3实施例的技术方案,其执行方式和有益效果类似,在这里不再赘述。The ground station provided by this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种控制器。参见图8，图8为本发明实施例提供的控制器的结构示意图，如图8所示，控制器20包括：通信接口21、一个或多个处理器22；一个或多个处理器单独或协同工作，通信接口21和处理器22连接；通信接口21用于：获取飞行器上搭载的拍摄设备拍摄获得的影像；处理器22用于：基于预设图像处理算法，计算获得所述拍摄设备在拍摄所述影像时的位置和姿态；处理器22还用于：基于所述位置和所述姿态，对所述影像进行投影处理和影像拼接处理，获得输出影像。An embodiment of the present invention provides a controller. Referring to FIG. 8, FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention. As shown in FIG. 8, the controller 20 includes: a communication interface 21 and one or more processors 22; the one or more processors work independently or in cooperation, and the communication interface 21 is connected to the processor 22. The communication interface 21 is configured to: acquire an image captured by a photographing device mounted on the aircraft; the processor 22 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image is captured; the processor 22 is further configured to: perform projection processing and image stitching on the image based on the position and the posture to obtain an output image.
可选的,所述通信接口21用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的码流数据。Optionally, the communication interface 21 is configured to: acquire code stream data of an image captured by a photographing device mounted on the aircraft.
可选的,所述通信接口21用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的缩略图。Optionally, the communication interface 21 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
可选的,所述通信接口21还用于:获取所述拍摄设备在拍摄所述影像时的GPS信息;所述处理器22还用于:基于所述影像对应的所述GPS信息,将所述位置转换为世界坐标系下的位置,将所述姿态转换为世界坐标系下的姿态。Optionally, the communication interface 21 is further configured to: acquire GPS information when the imaging device captures the image; and the processor 22 is further configured to: based on the GPS information corresponding to the image, The position is converted to a position in the world coordinate system, and the posture is converted into a posture in the world coordinate system.
可选的,所述处理器22用于:基于即时定位与地图构建SLAM算法,计算所述拍摄设备在拍摄所述影像时的位置和姿态。Optionally, the processor 22 is configured to: calculate a position and a posture of the photographing device when the image is captured based on a real-time positioning and map construction SLAM algorithm.
可选的,所述处理器22用于:构建代价函数,基于所述代价函数对所述影像投射到所述地形表面上的投影进行拼接处理。Optionally, the processor 22 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
可选的，所述处理器22还用于：对所述点云进行优化处理，获得符合预设质量条件的点云；所述处理器22用于：基于优化后的点云拟合形成地形表面。Optionally, the processor 22 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; the processor 22 is configured to: fit and form a terrain surface based on the optimized point cloud.
可选的,所述处理器22用于:对所述位置、所述姿态进行优化处理,获得符合预设精度条件的位置和姿态。Optionally, the processor 22 is configured to: perform optimization processing on the position and the posture, and obtain a position and a posture that meet a preset accuracy condition.
可选的,所述处理器22用于:基于预设策略对所述影像在所述地形表面上的投影进行色彩和/或亮度的调整。Optionally, the processor 22 is configured to perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
可选的,所述输出影像包括正射影像。Optionally, the output image includes an orthophoto.
可选的,当所述飞行器相对于地表以固定的相对高度飞行时,所述处理器22用于:控制所述飞行器的拍摄设备在水平方向上以相同的拍摄间隔进行拍摄Optionally, when the aircraft is flying at a fixed relative height with respect to the ground surface, the processor 22 is configured to: control the shooting device of the aircraft to shoot at the same shooting interval in the horizontal direction.
可选的,当所述飞行器相对于地表高度改变时,所述处理器22用于:控制所述飞行器的拍摄设备改变拍摄间隔进行拍摄。Optionally, when the aircraft is changed in height relative to the surface, the processor 22 is configured to: control the shooting device of the aircraft to change the shooting interval for shooting.
可选的，当所述飞行器以统一的绝对高度飞行时，所述处理器22用于：控制所述飞行器的拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。Optionally, when the aircraft flies at a uniform absolute height, the processor 22 is configured to: control the photographing device of the aircraft to shoot in the horizontal direction at a time-varying shooting interval, where the shooting interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
本实施例提供的控制器能够执行图1实施例的技术方案,其执行方式和有益效果类似,在这里不再赘述The controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例还提供一种控制器，该控制器在图8实施例的基础上，处理器22用于：基于所述位置和所述姿态，计算获得所述影像的半稠密或稠密点云，或者基于SLAM算法，计算获得所述影像的稀疏点云；基于计算获得的点云拟合地形表面；基于所述影像的位置和姿态，将所述影像投射到所述地形表面。An embodiment of the present invention further provides a controller. On the basis of the embodiment of FIG. 8, the processor 22 is configured to: calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on the SLAM algorithm; fit a terrain surface based on the calculated point cloud; and project the image onto the terrain surface based on the position and posture of the image.
本实施例提供的控制器能够执行图3实施例的技术方案,其执行方式和有益效果类似,在这里不再赘述。The controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
本发明实施例提供一种计算机可读存储介质，包括指令，当其在计算机上运行时，使得计算机执行上述实施例提供的输出影像生成方法。An embodiment of the present invention provides a computer-readable storage medium including instructions which, when run on a computer, cause the computer to execute the output image generation method provided by the above embodiments.
本发明实施例提供一种无人机。该无人机包括机身;动力系统,安装在所述机身,用于提供飞行动力;拍摄设备,安装在所述机身,用于拍摄影像;以及如上述实施例所述的控制器。Embodiments of the present invention provide a drone. The drone includes a fuselage; a power system mounted to the body for providing flight power; a photographing device mounted to the body for capturing an image; and a controller as described in the above embodiments.
其中,本实施例提供的无人机,其执行方式和有益效果与上述实施例所涉及的控制器相同,在这里不再赘述。The execution mode and the beneficial effects of the unmanned aerial vehicle provided in this embodiment are the same as those of the controller in the foregoing embodiment, and are not described herein again.
在本发明所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of the unit is only a logical function division. In actual implementation, there may be another division manner, for example, multiple units or components may be combined or Can be integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
上述以软件功能单元的形式实现的集成的单元，可以存储在一个计算机可读取存储介质中。上述软件功能单元存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)或处理器(processor)执行本发明各个实施例所述方法的部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(Read-Only Memory，ROM)、随机存取存储器(Random Access Memory，RAM)、磁碟或者光盘等各种可以存储程序代码的介质。The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
本领域技术人员可以清楚地了解到，为描述的方便和简洁，仅以上述各功能模块的划分进行举例说明，实际应用中，可以根据需要而将上述功能分配由不同的功能模块完成，即将装置的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。上述描述的装置的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, and details are not repeated here.
最后应说明的是：以上各实施例仅用以说明本发明的技术方案，而非对其限制；尽管参照前述各实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分或者全部技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本发明各实施例技术方案的范围。Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present invention; although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (51)

  1. 一种输出影像生成方法,其特征在于,包括:An output image generating method, comprising:
    获取飞行器上搭载的拍摄设备拍摄获得的影像;Obtaining images obtained by shooting equipment carried on the aircraft;
    基于预设图像处理算法,计算获得所述拍摄设备在拍摄所述影像时的位置和姿态;Calculating a position and a posture of the photographing device when the image is captured based on a preset image processing algorithm;
    基于所述位置和所述姿态,对所述影像进行投影处理和影像拼接处理,获得输出影像。Based on the position and the posture, the image is subjected to projection processing and image stitching processing to obtain an output image.
  2. 根据权利要求1所述的方法,其特征在于,所述获取飞行器上搭载的拍摄设备拍摄获得的影像,包括:The method according to claim 1, wherein the acquiring an image obtained by the photographing device mounted on the aircraft comprises:
    获取飞行器上搭载的拍摄设备拍摄获得的影像的码流数据。Obtain the code stream data of the image captured by the shooting device mounted on the aircraft.
  3. 根据权利要求1所述的方法,其特征在于,所述获取飞行器上搭载的拍摄设备拍摄获得的影像,包括:The method according to claim 1, wherein the acquiring an image obtained by the photographing device mounted on the aircraft comprises:
    获取飞行器上搭载的拍摄设备拍摄获得的影像的缩略图。Obtain a thumbnail of the image captured by the shooting device mounted on the aircraft.
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:The method of claim 3, wherein the method further comprises:
    显示获取到的所述缩略图。The obtained thumbnail is displayed.
  5. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method of claim 1 further comprising:
    获取所述拍摄设备在拍摄所述影像时的GPS信息;Obtaining GPS information of the photographing device when the image is captured;
    所述基于预设图像处理算法，计算获得所述拍摄设备在拍摄所述影像时的位置和姿态之后，所述方法还包括：After the calculating, based on the preset image processing algorithm, the position and the posture of the photographing device when capturing the image, the method further includes:
    基于所述影像对应的所述GPS信息,将所述位置转换为世界坐标系下的位置,将所述姿态转换为世界坐标系下的姿态。Based on the GPS information corresponding to the image, the position is converted into a position in a world coordinate system, and the posture is converted into a posture in a world coordinate system.
  6. 根据权利要求1所述的方法,其特征在于,所述基于预设图像处理算法,计算获得所述拍摄设备在拍摄所述影像时的位置和姿态,包括:The method according to claim 1, wherein the calculating the position and posture of the photographing device when the image is captured based on a preset image processing algorithm comprises:
    基于即时定位与地图构建SLAM算法,计算所述拍摄设备在拍摄所述影像时的位置和姿态。Based on the real-time positioning and map construction SLAM algorithm, the position and posture of the photographing device when the image is captured are calculated.
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述基于所述位置和所述姿态,对所述影像进行投影处理,包括:The method according to any one of claims 1 to 6, wherein the projecting the image based on the position and the posture comprises:
    基于所述位置和所述姿态,计算获得所述影像的半稠密或稠密点云,或者基于SLAM算法,计算获得所述影像的稀疏点云;Obtaining a semi-dense or dense point cloud of the image based on the position and the posture, or calculating a sparse point cloud of the image based on a SLAM algorithm;
    基于计算获得的点云拟合地形表面；Fitting a terrain surface based on the calculated point cloud;
    基于所述影像的位置和姿态,将所述影像投射到所述地形表面。The image is projected onto the terrain surface based on the position and orientation of the image.
  8. 根据权利要求7所述的方法,其特征在于,所述基于所述位置和所述姿态,对所述影像进行影像拼接处理,包括:The method according to claim 7, wherein the image splicing processing on the image based on the position and the posture comprises:
    构建代价函数,基于所述代价函数对所述影像投射到所述地形表面上的投影进行拼接处理。A cost function is constructed to stitch the projections of the image onto the surface of the terrain based on the cost function.
  9. 根据权利要求7所述的方法,其特征在于,所述基于所述位置和所述姿态,计算获得所述影像的半稠密或稠密点云,或者基于SLAM算法,计算获得所述影像的稀疏点云之后,所述方法还包括:The method according to claim 7, wherein the calculating a semi-dense or dense point cloud of the image based on the position and the posture, or calculating a sparse point of the image based on a SLAM algorithm After the cloud, the method further includes:
    对所述点云进行优化处理,获得符合预设质量条件的点云;Optimizing the point cloud to obtain a point cloud that meets preset quality conditions;
    所述基于计算获得的点云拟合地形表面，包括：The fitting a terrain surface based on the calculated point cloud includes:
    基于优化后的点云拟合形成地形表面。The terrain surface is formed based on the optimized point cloud fitting.
  10. 根据权利要求9所述的方法,其特征在于,所述基于优化后的点云拟合形成地形表面,包括:The method according to claim 9, wherein the forming a terrain surface based on the optimized point cloud fitting comprises:
    从优化后的点云中提取地面点;Extract ground points from the optimized point cloud;
    基于所述地面点拟合地形表面。The terrain surface is fitted based on the ground point.
  11. 根据权利要求7所述的方法,其特征在于,所述基于所述位置和所述姿态,计算获得所述影像的半稠密或稠密点云,或者基于SLAM算法,计算获得所述影像的稀疏点云之后,所述方法还包括:The method according to claim 7, wherein the calculating a semi-dense or dense point cloud of the image based on the position and the posture, or calculating a sparse point of the image based on a SLAM algorithm After the cloud, the method further includes:
    对所述位置、所述姿态进行优化处理,获得符合预设精度条件的位置和姿态。The position and the posture are optimized to obtain a position and a posture that meet the preset accuracy condition.
  12. 根据权利要求7所述的方法,其特征在于,所述基于所述影像的位置和姿态,将所述影像投射到所述地形表面之后,所述方法还包括:The method according to claim 7, wherein after the image is projected onto the terrain surface based on the position and orientation of the image, the method further comprises:
    基于预设策略对所述影像在所述地形表面上的投影进行色彩和/或亮度的调整。The color and/or brightness adjustment of the projection of the image on the terrain surface is performed based on a preset strategy.
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,所述输出影像包括正射影像。The method of any of claims 1-12, wherein the output image comprises an orthophoto.
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:The method of claim 13 wherein the method further comprises:
    显示所述正射影像。The orthophoto is displayed.
  15. 根据权利要求1所述的方法，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。The method according to claim 1, wherein when the aircraft flies at a fixed relative height with respect to the ground surface, the photographing device performs photographing at the same photographing interval in the horizontal direction.
  16. 根据权利要求1所述的方法,其特征在于,当所述飞行器相对于地表高度改变时,所述拍摄设备的拍摄间隔改变。The method of claim 1 wherein the photographing interval of the photographing device changes when the aircraft is changed in height relative to the surface.
  17. 根据权利要求16所述的方法，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The method according to claim 16, wherein when the aircraft flies at a uniform absolute height, the photographing device performs photographing at a time-varying photographing interval in the horizontal direction, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
  18. 一种地面站,其特征在于,包括:通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;A ground station, comprising: a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像;The communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
    所述处理器用于:基于预设图像处理算法,计算获得所述拍摄设备在拍摄所述影像时的位置和姿态;The processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
    所述处理器还用于:基于所述位置和所述姿态,对所述影像进行投影处理和影像拼接处理,获得输出影像。The processor is further configured to perform a projection process and an image stitching process on the image to obtain an output image based on the position and the posture.
  19. 根据权利要求18所述的地面站,其特征在于,所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的码流数据。The ground station according to claim 18, wherein the communication interface is configured to: acquire code stream data of an image captured by a photographing device mounted on the aircraft.
  20. 根据权利要求18所述的地面站,其特征在于,所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的缩略图。The ground station according to claim 18, wherein the communication interface is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  21. 根据权利要求20所述的地面站,其特征在于,所述地面站还包括显示组件,所述显示组件与所述处理器通信连接;The ground station of claim 20, wherein said ground station further comprises a display component, said display component being communicatively coupled to said processor;
    所述显示组件用于:显示获取到的所述缩略图。The display component is configured to: display the acquired thumbnail.
  22. 根据权利要求18所述的地面站,其特征在于,所述通信接口还用于:获取所述拍摄设备在拍摄所述影像时的GPS信息;The ground station according to claim 18, wherein the communication interface is further configured to: acquire GPS information of the photographing device when the image is captured;
    所述处理器还用于:基于所述影像对应的所述GPS信息,将所述位置转换为世界坐标系下的位置,将所述姿态转换为世界坐标系下的姿态。The processor is further configured to: convert the position into a position in a world coordinate system based on the GPS information corresponding to the image, and convert the posture into a posture in a world coordinate system.
  23. 根据权利要求18所述的地面站,其特征在于,所述处理器用于:基于即时定位与地图构建SLAM算法,计算所述拍摄设备在拍摄所述影像时的位置和姿态。The ground station according to claim 18, wherein the processor is configured to: calculate a position and a posture of the photographing device when the image is captured based on an instant positioning and map construction SLAM algorithm.
  24. 根据权利要求18-23中任一项所述的地面站，其特征在于，所述处理器用于：The ground station according to any one of claims 18 to 23, wherein the processor is configured to:
    基于计算获得的点云拟合地形表面；Fitting a terrain surface based on the calculated point cloud;
    基于所述影像的位置和姿态,将所述影像投射到所述地形表面。The image is projected onto the terrain surface based on the position and orientation of the image.
  25. 根据权利要求24所述的地面站,其特征在于,所述处理器用于:构建代价函数,基于所述代价函数对所述影像投射到所述地形表面上的投影进行拼接处理。The ground station according to claim 24, wherein said processor is configured to: construct a cost function, and splicing said projection of said image onto said surface of said terrain based on said cost function.
  26. 根据权利要求24所述的地面站,其特征在于,所述处理器还用于:The ground station of claim 24, wherein the processor is further configured to:
    对所述点云进行优化处理,获得符合预设质量条件的点云;Optimizing the point cloud to obtain a point cloud that meets preset quality conditions;
    所述处理器用于:基于优化后的点云拟合形成地形表面。The processor is configured to form a terrain surface based on the optimized point cloud fitting.
  27. 根据权利要求26所述的地面站,其特征在于,所述处理器用于:The ground station of claim 26 wherein said processor is operative to:
    从优化后的点云中提取地面点;Extract ground points from the optimized point cloud;
    基于所述地面点拟合地形表面。The terrain surface is fitted based on the ground point.
  28. 根据权利要求24所述的地面站,其特征在于,所述处理器用于:The ground station of claim 24 wherein said processor is operative to:
    对所述位置、所述姿态进行优化处理,获得符合预设精度条件的位置和姿态。The position and the posture are optimized to obtain a position and a posture that meet the preset accuracy condition.
  29. 根据权利要求24所述的地面站,其特征在于,所述处理器用于:基于预设策略对所述影像在所述地形表面上的投影进行色彩和/或亮度的调整。The ground station according to claim 24, wherein the processor is configured to perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
  30. 根据权利要求18-29中任一项所述的地面站,其特征在于,所述输出影像包括正射影像。A ground station according to any of claims 18-29, wherein the output image comprises an orthophoto.
  31. 根据权利要求30所述的地面站,其特征在于,显示组件用于显示所述正射影像。The ground station of claim 30 wherein the display component is for displaying the orthophoto.
  32. 根据权利要求18所述的地面站，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述处理器用于：控制所述飞行器的拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。The ground station according to claim 18, wherein when the aircraft flies at a fixed relative height with respect to the ground surface, the processor is configured to: control the photographing device of the aircraft to photograph at the same photographing interval in the horizontal direction.
  33. 根据权利要求18所述的地面站，其特征在于，当所述飞行器相对于地表高度改变时，所述处理器用于：控制所述飞行器的拍摄设备改变拍摄间隔进行拍摄。The ground station according to claim 18, wherein when the height of the aircraft relative to the ground surface changes, the processor is configured to: control the photographing device of the aircraft to change the photographing interval for photographing.
  34. 根据权利要求33所述的地面站，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述处理器用于：控制所述飞行器的拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The ground station according to claim 33, wherein when the aircraft flies at a uniform absolute height, the processor is configured to: control the photographing device of the aircraft to photograph at a time-varying photographing interval in the horizontal direction, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
  35. 一种飞行器的控制器,其特征在于,通信接口、一个或多个处理器;所述一个或多个处理器单独或协同工作,所述通信接口和所述处理器连接;A controller for an aircraft, characterized by a communication interface, one or more processors; the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
    所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像;The communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
    所述处理器用于:基于预设图像处理算法,计算获得所述拍摄设备在拍摄所述影像时的位置和姿态;The processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
    所述处理器还用于:基于所述位置和所述姿态,对所述影像进行投影处理和影像拼接处理,获得输出影像。The processor is further configured to perform a projection process and an image stitching process on the image to obtain an output image based on the position and the posture.
  36. 根据权利要求35所述的控制器,其特征在于,所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的码流数据。The controller according to claim 35, wherein the communication interface is configured to: acquire code stream data of an image captured by a photographing device mounted on the aircraft.
  37. 根据权利要求35所述的控制器,其特征在于,所述通信接口用于:获取飞行器上搭载的拍摄设备拍摄获得的影像的缩略图。The controller according to claim 35, wherein the communication interface is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  38. 根据权利要求35所述的控制器,其特征在于,所述通信接口还用于:获取所述拍摄设备在拍摄所述影像时的GPS信息;The controller according to claim 35, wherein the communication interface is further configured to: acquire GPS information of the photographing device when the image is captured;
    所述处理器还用于:基于所述影像对应的所述GPS信息,将所述位置转换为世界坐标系下的位置,将所述姿态转换为世界坐标系下的姿态。The processor is further configured to: convert the position into a position in a world coordinate system based on the GPS information corresponding to the image, and convert the posture into a posture in a world coordinate system.
  39. 根据权利要求35所述的控制器,其特征在于,所述处理器用于:基于即时定位与地图构建SLAM算法,计算所述拍摄设备在拍摄所述影像时的位置和姿态。The controller according to claim 35, wherein the processor is configured to: calculate a position and a posture of the photographing device when the image is captured based on an instant positioning and map construction SLAM algorithm.
  40. 根据权利要求35-39中任一项所述的控制器,其特征在于,所述处理器用于:The controller according to any one of claims 35 to 39, wherein the processor is configured to:
    基于所述位置和所述姿态,计算获得所述影像的半稠密或稠密点云,或者基于SLAM算法,计算获得所述影像的稀疏点云;Obtaining a semi-dense or dense point cloud of the image based on the position and the posture, or calculating a sparse point cloud of the image based on a SLAM algorithm;
    基于计算获得的点云拟合地形表面；Fitting a terrain surface based on the calculated point cloud;
    基于所述影像的位置和姿态,将所述影像投射到所述地形表面。 The image is projected onto the terrain surface based on the position and orientation of the image.
  41. 根据权利要求40所述的控制器,其特征在于,所述处理器用于:构建代价函数,基于所述代价函数对所述影像投射到所述地形表面上的投影进行拼接处理。The controller according to claim 40, wherein said processor is configured to: construct a cost function, and perform splicing processing on said projection of said image onto said surface of said terrain based on said cost function.
  42. 根据权利要求40所述的控制器,其特征在于,所述处理器还用于:The controller of claim 40, wherein the processor is further configured to:
    对所述点云进行优化处理,获得符合预设质量条件的点云;Optimizing the point cloud to obtain a point cloud that meets preset quality conditions;
    所述处理器用于:基于优化后的点云拟合形成地形表面。The processor is configured to form a terrain surface based on the optimized point cloud fitting.
  43. 根据权利要求42所述的控制器,其特征在于,所述处理器用于:The controller of claim 42 wherein said processor is operative to:
    从优化后的点云中提取地面点;Extract ground points from the optimized point cloud;
    基于所述地面点拟合地形表面。The terrain surface is fitted based on the ground point.
  44. 根据权利要求40所述的控制器,其特征在于,所述处理器用于:The controller of claim 40 wherein said processor is operative to:
    对所述位置、所述姿态进行优化处理,获得符合预设精度条件的位置和姿态。The position and the posture are optimized to obtain a position and a posture that meet the preset accuracy condition.
  45. 根据权利要求40所述的控制器,其特征在于,所述处理器用于:基于预设策略对所述影像在所述地形表面上的投影进行色彩和/或亮度的调整。The controller according to claim 40, wherein the processor is configured to perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
  46. 根据权利要求35-45中任一项所述的控制器,其特征在于,所述输出影像包括正射影像。The controller of any of claims 35-45, wherein the output image comprises an orthophoto.
  47. 根据权利要求35所述的控制器，其特征在于，当所述飞行器相对于地表以固定的相对高度飞行时，所述处理器用于：控制所述拍摄设备在水平方向上以相同的拍摄间隔进行拍摄。The controller according to claim 35, wherein when the aircraft flies at a fixed relative height with respect to the ground surface, the processor is configured to: control the photographing device to photograph at the same photographing interval in the horizontal direction.
  48. 根据权利要求35所述的控制器,其特征在于,当所述飞行器相对于地表高度改变时,所述处理器用于:控制所述拍摄设备改变拍摄间隔进行拍摄。The controller according to claim 35, wherein said processor is adapted to: control said photographing device to change a photographing interval for photographing when said aircraft is changed in height relative to a surface.
  49. 根据权利要求48所述的控制器，其特征在于，当所述飞行器以统一的绝对高度飞行时，所述处理器用于：控制所述拍摄设备在水平方向上以时变的拍摄间隔进行拍摄，其中，所述拍摄间隔与预先配置的影像重叠率，以及所述飞行器与地表的相对高度关联。The controller according to claim 48, wherein when the aircraft flies at a uniform absolute height, the processor is configured to: control the photographing device to photograph at a time-varying photographing interval in the horizontal direction, wherein the photographing interval is associated with a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
  50. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如权利要求1-17中任一项所述的输出影像生成方法。 A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the output image generation method of any of claims 1-17.
  51. 一种无人机,其特征在于,包括:A drone, characterized in that it comprises:
    机身;body;
    动力系统,安装在所述机身,用于提供飞行动力;a power system mounted to the fuselage for providing flight power;
    拍摄设备,安装在所述机身,用于拍摄影像;a photographing device mounted on the body for capturing an image;
    以及如权利要求35-49中任一项所述的控制器。 And a controller according to any of claims 35-49.
PCT/CN2017/112189 2017-11-21 2017-11-21 Method, device, and unmanned aerial vehicle for generating output image WO2019100214A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780029525.XA CN110073403A (en) 2017-11-21 2017-11-21 Image output generation method, equipment and unmanned plane
PCT/CN2017/112189 WO2019100214A1 (en) 2017-11-21 2017-11-21 Method, device, and unmanned aerial vehicle for generating output image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112189 WO2019100214A1 (en) 2017-11-21 2017-11-21 Method, device, and unmanned aerial vehicle for generating output image

Publications (1)

Publication Number Publication Date
WO2019100214A1 true WO2019100214A1 (en) 2019-05-31

Family

ID=66630446

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112189 WO2019100214A1 (en) 2017-11-21 2017-11-21 Method, device, and unmanned aerial vehicle for generating output image

Country Status (2)

Country Link
CN (1) CN110073403A (en)
WO (1) WO2019100214A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112771842A (en) * 2020-06-02 2021-05-07 深圳市大疆创新科技有限公司 Imaging method, imaging apparatus, computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103941748A (en) * 2014-04-29 2014-07-23 百度在线网络技术(北京)有限公司 Autonomous navigation method and system and map modeling method and system
US20140241576A1 (en) * 2013-02-28 2014-08-28 Electronics And Telecommunications Research Institute Apparatus and method for camera tracking
CN105045279A (en) * 2015-08-03 2015-11-11 余江 System and method for automatically generating panorama photographs through aerial photography of unmanned aerial aircraft
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105874349A (en) * 2015-07-31 2016-08-17 深圳市大疆创新科技有限公司 Detection device, detection system, detection method, and removable device
CN105865454A (en) * 2016-05-31 2016-08-17 西北工业大学 Unmanned aerial vehicle navigation method based on real-time online map generation
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102138163B (en) * 2008-08-29 2014-04-30 三菱电机株式会社 Bird's-eye image forming device, bird's-eye image forming method
CN105627991B (en) * 2015-12-21 2017-12-12 武汉大学 A kind of unmanned plane image real time panoramic joining method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241576A1 (en) * 2013-02-28 2014-08-28 Electronics And Telecommunications Research Institute Apparatus and method for camera tracking
CN103941748A (en) * 2014-04-29 2014-07-23 百度在线网络技术(北京)有限公司 Autonomous navigation method and system and map modeling method and system
CN105874349A (en) * 2015-07-31 2016-08-17 深圳市大疆创新科技有限公司 Detection device, detection system, detection method, and removable device
CN105045279A (en) * 2015-08-03 2015-11-11 余江 System and method for automatically generating panorama photographs through aerial photography of unmanned aerial aircraft
CN105678754A (en) * 2015-12-31 2016-06-15 西北工业大学 Unmanned aerial vehicle real-time map reconstruction method
CN105865454A (en) * 2016-05-31 2016-08-17 西北工业大学 Unmanned aerial vehicle navigation method based on real-time online map generation
CN106097304A (en) * 2016-05-31 2016-11-09 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method

Also Published As

Publication number Publication date
CN110073403A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
WO2019100219A1 (en) Output image generation method, device and unmanned aerial vehicle
CN106529495B (en) Obstacle detection method and device for aircraft
KR101754599B1 (en) System and Method for Extracting Automatically 3D Object Based on Drone Photograph Image
US9013576B2 (en) Aerial photograph image pickup method and aerial photograph image pickup apparatus
CN107492069B (en) Image fusion method based on multi-lens sensor
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
JP5748561B2 (en) Aerial photography imaging method and aerial photography imaging apparatus
US20180262789A1 (en) System for georeferenced, geo-oriented realtime video streams
JP7251474B2 (en) Information processing device, information processing method, information processing program, image processing device, and image processing system
WO2018120350A1 (en) Method and device for positioning unmanned aerial vehicle
CN109255808B (en) Building texture extraction method and device based on oblique images
CN112461210B (en) Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN106094876A (en) A kind of unmanned plane target locking system and method thereof
CN113454685A (en) Cloud-based camera calibration
CN110275179A (en) A kind of building merged based on laser radar and vision ground drawing method
WO2019230604A1 (en) Inspection system
JPWO2018073878A1 (en) Three-dimensional shape estimation method, three-dimensional shape estimation system, flying object, program, and recording medium
CN111340942A (en) Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof
US20220113421A1 (en) Online point cloud processing of lidar and camera data
WO2019100214A1 (en) Method, device, and unmanned aerial vehicle for generating output image
KR20220069541A (en) Map making Platform apparatus and map making method using the platform
KR100956446B1 (en) Method for automatic extraction of optimal 3d-object facade texture using digital aerial images
JP2019207467A (en) Three-dimensional map correction device, three-dimensional map correction method, and three-dimensional map correction program
WO2021115192A1 (en) Image processing device, image processing method, program and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933048

Country of ref document: EP

Kind code of ref document: A1