WO2019100214A1 - Method, device and unmanned aerial vehicle for generating an output image - Google Patents

Method, device and unmanned aerial vehicle for generating an output image

Info

Publication number
WO2019100214A1
WO2019100214A1 (application PCT/CN2017/112189; CN 2017112189 W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
posture
aircraft
processor
point cloud
Prior art date
Application number
PCT/CN2017/112189
Other languages
English (en)
Chinese (zh)
Inventor
马岳文
张明磊
马东东
赵开勇
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2017/112189 priority Critical patent/WO2019100214A1/fr
Priority to CN201780029525.XA priority patent/CN110073403A/zh
Publication of WO2019100214A1 publication Critical patent/WO2019100214A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • the present application relates to the field of UAV application technologies, and in particular, to an output image generation method, device, and drone.
  • a Digital Orthophoto Map (DOM) is a digital aerial/remote-sensing image (monochrome or color) that has been corrected pixel by pixel for projection differences using a digital elevation model and then mosaicked; the image is generated by stitching according to the extent of the map sheet. Since the image uses the real terrain surface as the mosaic projection surface, it carries real geographic coordinate information, and true distances can be measured on it.
  • the method for generating digital orthophotos in the prior art mainly collects the position and posture of the shooting device at the time of image capture by means of a global positioning system (GPS) and an inertial measurement unit (IMU) mounted on the shooting device; based on that position and posture, the images are projected onto an estimated average elevation surface and then stitched to obtain the digital orthophoto.
  • the embodiments of the invention provide an output image generation method, a device, and a drone, to obtain an output image with a better stitching effect while reducing equipment cost.
  • a first aspect of the embodiments of the present invention provides a method for generating an output image, including: acquiring an image captured by a photographing device mounted on an aircraft; calculating, based on a preset image processing algorithm, a position and a posture of the photographing device when the image is captured; and
  • performing projection processing and image stitching processing on the image, based on the position and the posture, to obtain an output image.
  • a second aspect of the embodiments of the present invention provides a ground station, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
  • the processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
  • the processor is further configured to perform a projection process and an image stitching process on the image to obtain an output image based on the position and the posture.
  • a third aspect of the embodiments of the present invention provides a controller, including:
  • a communication interface and one or more processors, the one or more processors operating separately or in cooperation, the communication interface being coupled to the processor;
  • the communication interface is configured to: acquire an image captured by a photographing device mounted on the aircraft;
  • the processor is configured to: obtain, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured;
  • the processor is further configured to perform a projection process and an image stitching process on the image to obtain an output image based on the position and the posture.
  • a fourth aspect of an embodiment of the present invention provides a computer readable storage medium comprising instructions, when executed on a computer, causing a computer to execute the output image generating method of the first aspect described above.
  • a fifth aspect of the embodiments of the present invention provides a drone, including:
  • a power system mounted to the fuselage for providing flight power
  • a photographing device mounted on the fuselage for capturing images
  • the output image generation method, device, and drone provided by the embodiments of the present invention acquire the image captured by the photographing device mounted on the aircraft, calculate the position and posture of the photographing device at the time of capture based on a preset image processing algorithm, and, based on that position and posture, perform projection processing and image stitching processing on the image to obtain an output image. Since the position and posture of the photographing device when the image was captured are obtained through the preset image processing algorithm, it is not necessary to mount a high-precision GPS and IMU on the aircraft to obtain an accurate position and posture, so equipment cost can be reduced while a better-stitched output image is obtained.
  • FIG. 2 is a schematic diagram of a connection between a ground station and an aircraft according to an embodiment of the present invention
  • FIG. 3 is a flowchart of an image projection method according to an embodiment of the present invention.
  • FIG. 4a and FIG. 4b are schematic diagrams of output images of the same scene provided by the present invention.
  • FIG. 5a and FIG. 5b are schematic diagrams of output images of the same scene according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for generating an output image according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • when a component is referred to as being "fixed" to another component, it can be directly on the other component, or an intervening component may be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may be present.
  • embodiments of the present invention provide an output image generation method, which may be performed by a ground station or by a controller mounted on the drone.
  • the following embodiments take the ground station as an example for detailed description.
  • the implementation manner of the controller is similar to that of the ground station and is not described in this embodiment.
  • FIG. 1 is a flowchart of a method for generating an output image according to the present invention. As shown in FIG. 1 , the method in this embodiment includes:
  • Step 101 Acquire an image captured by a shooting device mounted on the aircraft.
  • the ground station in this embodiment is a device having a computing function and/or processing capability, and the device may specifically be a remote controller, a smart phone, a tablet computer, a laptop computer, a watch, a wristband, and the like, and combinations thereof.
  • the aircraft in this embodiment may specifically be a drone equipped with a photographing device, a helicopter, a manned fixed-wing aircraft, a hot air balloon, or the like.
  • the ground station 21 and the aircraft 22 can be connected through an Application Programming Interface (API) 23, but are not limited to being connected through an API.
  • the ground station 21 and the aircraft 22 can be connected by wire or wirelessly, for example through at least one of the following: Wireless Fidelity (Wi-Fi), Bluetooth, software-defined radio (SDR), or other custom protocols.
  • the aircraft can perform automatic cruising and photographing according to a predetermined route, and can also perform cruising and photographing under the control of the ground station.
  • the shooting device performs shooting at a preset time interval or distance interval, and images captured at adjacent shooting moments have overlapping portions. The size of the overlapping portion can be set as needed, for example by setting a corresponding shooting interval or distance interval to obtain the required overlap size; however, the size of the image overlapping portion is not limited to being determined by setting the shooting interval or distance interval.
  • the photographing device of the aircraft in this embodiment can perform shooting in the following possible ways:
  • in one possible way, the shooting interval of the aircraft's shooting device varies; for example, when the aircraft flies at a uniform absolute altitude, the shooting device shoots at a varying interval in the horizontal direction, where the shooting interval can be determined based on the pre-configured image overlap rate and the relative height of the aircraft above the surface.
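  • a back-of-the-envelope sketch of how a distance-based shooting interval could follow from the overlap rate and relative height (the simple pinhole-model formula and all parameter values below are illustrative assumptions, not taken from this application):

```python
def ground_footprint(sensor_width_m, focal_length_m, relative_height_m):
    """Width of ground covered by one image (simple pinhole model, nadir view)."""
    return sensor_width_m * relative_height_m / focal_length_m


def shot_distance_interval(sensor_width_m, focal_length_m, relative_height_m,
                           overlap_rate):
    """Horizontal distance between shots so that adjacent images overlap by
    the given rate (0.0 to 1.0)."""
    footprint = ground_footprint(sensor_width_m, focal_length_m, relative_height_m)
    return footprint * (1.0 - overlap_rate)


# Hypothetical numbers: 36 mm sensor width, 35 mm lens, 100 m above ground,
# 80% overlap between adjacent images
interval = shot_distance_interval(0.036, 0.035, 100.0, 0.8)
```

A higher relative height enlarges the footprint and therefore the allowed interval, which is why the interval must vary when the terrain height under the aircraft changes.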
  • the ground station can obtain the image obtained by the shooting device by using the following possible ways:
  • the aircraft transmits the image captured by the photographing device to the ground station in real time through the API between it and the ground station.
  • the aircraft transmits the image captured by the photographing device within the preset time interval to the ground station at preset time intervals.
  • the aircraft transmits the images captured by the photographing device during the entire cruise to the ground station.
  • the aircraft may transmit the image captured by the photographing device to the ground station in the form of code-stream data, or may send the image to the ground station in the form of a thumbnail, depending on the computing power of the aircraft and the ground station;
  • the resolution of the returned code-stream data or thumbnail is not specifically limited, and the original image may also be returned.
  • taking the thumbnail form as an example: when images are sent to the ground station as thumbnails, the ground station can display the received thumbnails so that the user can clearly view the images captured in real time.
  • Step 102 Calculate, according to a preset image processing algorithm, a position and a posture of the photographing device when the image is captured.
  • the preset image processing algorithm in this embodiment may specifically be a structure-from-motion (SfM) algorithm, an aerial triangulation algorithm, or a simultaneous localization and mapping (SLAM) algorithm.
  • the SLAM algorithm is taken as an example to calculate the position and posture of the photographing device when the image is captured.
  • the method for calculating the position and posture of the photographing device when the image is taken by the SLAM algorithm is similar to the prior art, and will not be described herein.
  • the SLAM algorithm calculates the position and posture of the photographing device based on the matching of image feature points; therefore, the position and posture obtained by the SLAM calculation are a relative position and a relative posture within the shooting scene.
  • to give the position and posture corresponding to the image more practical reference value, they can be converted into world coordinates. Specifically, the aircraft sends the image to the ground station together with the GPS information of the position at which the image was captured, and the ground station converts the calculated position and posture into a position and posture in world coordinates based on the GPS information corresponding to the image.
  • in another embodiment, world coordinates may be acquired by recognizing a preset marker, and the calculated position and posture are then converted into a position and posture in world coordinates.
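  • the conversion from the SLAM frame to world coordinates can be illustrated as a least-squares similarity transform (scale, rotation, translation) estimated from matching SLAM and GPS camera positions. The sketch below uses Umeyama's method; the application does not specify this particular algorithm, and all values are illustrative:

```python
import numpy as np


def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst points (Umeyama's method)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of the two sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    var_s = (src_c ** 2).sum() / len(src)     # variance of the source points
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t


# SLAM camera positions (relative frame) and matching GPS positions (world
# frame); here the world frame is a synthetic scale-2 shift of the SLAM frame
slam_pos = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]]
gps_pos = [(2.0 * np.array(p) + np.array([10.0, -5.0, 3.0])).tolist()
           for p in slam_pos]
s, R, t = similarity_transform(slam_pos, gps_pos)
world = s * (R @ np.array([0.5, 0.5, 0.5])) + t  # a SLAM point in world coords
```

Once s, R, and t are known, every camera position (and, via R, every camera orientation) can be expressed in world coordinates.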
  • Step 103 Perform projection processing and image splicing processing on the image based on the position and the posture to obtain an output image.
  • the output image involved in this embodiment may specifically be an orthophoto, such as a digital orthophoto map or another image with real geographic coordinate information obtained by orthographic projection.
  • in this embodiment, the image projection methods based on the position and posture of the photographing device (which may be the relative position and posture calculated by the SLAM algorithm, or the position and posture in the world coordinate system) include the following:
  • in one way, an average elevation surface is estimated, and the image is projected onto the average elevation surface according to the position and posture of the photographing device.
  • the way of obtaining the estimated average elevation surface is similar to the prior art and will not be described here.
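  • projecting a pixel onto such an average elevation surface amounts to intersecting its viewing ray with a horizontal plane. A minimal sketch (the intrinsic matrix, the simple pose parameterization, and the numbers are illustrative assumptions):

```python
import numpy as np


def pixel_to_ground(u, v, K, R, cam_pos, plane_z):
    """Intersect the viewing ray of pixel (u, v) with the horizontal plane
    z = plane_z. K is the 3x3 intrinsic matrix, R maps camera coordinates to
    world coordinates, and cam_pos is the camera center in world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                              # ray in world frame
    lam = (plane_z - cam_pos[2]) / ray_world[2]          # scale to reach plane
    return cam_pos + lam * ray_world


# Nadir-looking camera 100 m above a plane at elevation 0
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])        # 180° about x: camera looks straight down
cam_pos = np.array([0.0, 0.0, 100.0])
ground = pixel_to_ground(320, 240, K, R, cam_pos, 0.0)  # principal ray -> nadir
```

Sweeping (u, v) over the whole image and resampling colors at the resulting ground coordinates produces the projected (orthorectified) image on that plane.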
  • FIG. 3 is a flowchart of a method for image projection according to an embodiment of the present invention.
  • a method for projecting an image includes:
  • Step 301 Calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on the SLAM algorithm.
  • Step 302 Fit a terrain surface based on the calculated point cloud.
  • Step 303 Project the image onto the terrain surface based on the position and posture of the image.
  • the method for calculating the image dense point cloud, the semi-dense point cloud, or the sparse point cloud in the embodiment of FIG. 3 may be any method in the prior art, which is not specifically limited in this embodiment.
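  • fitting a terrain surface from a point cloud (Step 302) can be illustrated, in its simplest form, by a least-squares plane fit. This is a toy sketch; a practical digital elevation model would use a gridded or triangulated surface, and all names here are illustrative:

```python
import numpy as np


def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)


def terrain_height(coeffs, x, y):
    """Elevation of the fitted surface at (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c


# Points lying exactly on the plane z = 0.1*x + 0.2*y + 5
pts = [(0, 0, 5.0), (10, 0, 6.0), (0, 10, 7.0), (10, 10, 8.0)]
coeffs = fit_plane(pts)
```

Projection onto this fitted surface then follows the same ray-intersection idea as with the average elevation plane, but against a surface whose height varies with (x, y).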
  • in one way, the image may first be projected onto the fitted terrain surface based on the method shown in FIG. 3, and the images on the terrain surface are then stitched to obtain a relatively rough output image.
  • in order to obtain a more refined output image, the point cloud obtained by the above calculation may first be divided into ground points and non-ground points, the terrain surface fitted from the ground points in the point cloud, and the image then projected onto the terrain surface according to the position and posture of the shooting device when the image was captured.
  • optionally, the point cloud obtained by the above calculation and/or the position and posture obtained by the above calculation may be optimized, to obtain a point cloud that meets a preset quality condition and/or a position and posture that meet a preset accuracy condition.
  • the optimized point cloud is then divided into ground points and non-ground points, so that a digital elevation model is generated by fitting the ground points in the optimized point cloud, and this digital elevation model serves as the terrain surface for projection.
  • the timing of dividing the point cloud into ground points and non-ground points is not uniquely limited in this embodiment: the point cloud may also be divided first, and the point cloud and/or the position and posture optimized afterwards; this embodiment does not specifically limit it.
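  • one simple way to divide a point cloud into ground and non-ground points (a hypothetical grid-based height filter, not necessarily the division method intended here) keeps, within each ground cell, the points near the cell's lowest elevation as ground:

```python
from collections import defaultdict


def split_ground_points(points, cell_size=5.0, height_tol=0.5):
    """Split (x, y, z) points into (ground, non_ground) lists. A point counts
    as 'ground' if its z is within height_tol of the lowest z in its cell."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell_size), int(p[1] // cell_size))].append(p)
    ground, non_ground = [], []
    for cell_points in cells.values():
        z_min = min(p[2] for p in cell_points)
        for p in cell_points:
            (ground if p[2] - z_min <= height_tol else non_ground).append(p)
    return ground, non_ground


# Flat terrain near z = 0 with one 10 m "building" point at (2, 2)
pts = [(0.0, 0.0, 0.1), (1.0, 1.0, 0.0), (2.0, 2.0, 10.0), (3.0, 3.0, 0.2)]
ground, non_ground = split_ground_points(pts)
```

The ground points then feed the terrain fit, while the non-ground points can later be excluded when choosing the stitching line.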
  • the stitching processing method in this embodiment may be one of the following: a direct overlay method, a panoramic image stitching method, a method of selecting, for each region of the final image, the image region closest to the image center, or a cost-function-based stitching method.
  • in this embodiment, fitting a terrain surface from the point cloud is taken as an example to determine the projection surface, and the projections on the projection surface are stitched with the cost-function-based method: a cost function is constructed with the distance from the projected pixel to the photographing device as a constraint, and, based on the cost function, the projections of the images onto the terrain surface are stitched so that the color difference on both sides of the stitching line is minimized.
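  • the idea of choosing a stitching line that minimizes color difference can be sketched with dynamic programming over the overlap region. This is a simplified illustration: the cost below uses only the squared color difference between the two overlapping projections, omitting the pixel-to-camera-distance constraint described above:

```python
def best_seam(a, b):
    """Find, per row, the column of a vertical stitching line through the
    overlap of images `a` and `b` (H x W grids of grayscale values) that
    minimizes the summed squared color difference along the line."""
    h, w = len(a), len(a[0])
    diff = [[(a[r][c] - b[r][c]) ** 2 for c in range(w)] for r in range(h)]
    cost = [diff[0][:]]
    for r in range(1, h):
        row = []
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)  # seam moves <= 1 column/row
            row.append(diff[r][c] + min(cost[r - 1][lo:hi]))
        cost.append(row)
    # Backtrack from the cheapest final column
    seam = [min(range(w), key=lambda c: cost[-1][c])]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    return seam[::-1]


# Two 3x4 overlap strips that agree only in column 1
a = [[5, 7, 9, 9], [5, 7, 8, 9], [5, 7, 9, 9]]
b = [[6, 7, 3, 3], [6, 7, 3, 3], [6, 7, 3, 3]]
seam = best_seam(a, b)
```

Pixels left of the seam are taken from one projection and pixels right of it from the other, so the transition runs where the two projections already agree.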
  • the ground station can process the received image by using the following two working modes:
  • in the first working mode, the ground station processes each image as soon as it is received. That is to say, during the aircraft's cruise, the ground station processes each received image to obtain a semi-dense point cloud, dense point cloud, or sparse point cloud of the image; in this way, the ground station updates the processed semi-dense, dense, or sparse point cloud for each image received.
  • the above "immediate" processing is not to be taken literally, but depends on the processing speed of the ground station: if the processing speed of the ground station can keep up with reception, the ground station processes each image immediately after receiving it.
  • in the second working mode, the ground station processes the received images sequentially. Specifically, the ground station may process the images in the order in which they are received, in the order in which they are stored, or in another custom processing order, which is not specifically limited in this embodiment.
  • after image acquisition is complete, global color adjustment and/or brightness adjustment may first be performed based on the calculated point cloud to noticeably improve image quality. Further, based on the adjusted projections of the images, a cost function is constructed with the distance from the projected pixel to the photographing device as a constraint, and the projections of the images onto the terrain surface are stitched based on the cost function so that the color difference on both sides of the stitching line is minimized, yielding an output image with good overall integrity.
  • when the cost function is constructed, non-ground points in the point cloud may also be excluded, so that the stitching line automatically avoids non-ground areas, resulting in an output image with a better visual effect.
  • FIG. 4a and FIG. 4b are schematic diagrams of output images of the same scene provided by the present invention, where FIG. 4a is an output image obtained by using an estimated elevation surface as the projection surface, and FIG. 4b is an output image obtained by using a terrain surface formed by point cloud fitting as the projection surface and stitching with the cost-function method.
  • the output image of FIG. 4a exhibits severe stitching misalignment.
  • when the terrain surface formed by point cloud fitting is used as the projection surface, the terrain can be fitted more accurately, and the cost function minimizes the color difference on both sides of the stitching line; therefore, the resulting output image shows no obvious misalignment and is better overall. Hence, in the embodiment of the present invention, fitting the terrain surface from the point cloud and performing the stitching process with the cost function can solve the problem of output image stitching misalignment.
  • before the stitching process, the color and brightness of the projection on the terrain surface may be adjusted based on a preset strategy, which enables a better stitching effect in the subsequent stitching process.
  • FIG. 5a and FIG. 5b are schematic diagrams of output images of the same scene according to an embodiment of the present invention.
  • the projection on the terrain surface in FIG. 5a was not processed for color and brightness; therefore, the overall output image in FIG. 5a has poor consistency in color and brightness and a poor visual effect. In FIG. 5b, the brightness and color of the projection on the terrain surface were processed before stitching, so the resulting output image is more consistent in color and brightness and has a better visual effect. Therefore, the embodiment of the present invention can effectively improve the visual effect of the output image by performing color and brightness processing on the projection on the terrain surface before the stitching process.
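  • a minimal sketch of such a brightness adjustment (illustrative only, not the algorithm claimed here): give one projection a single multiplicative gain chosen so that its mean brightness over the overlap region matches the neighboring projection's:

```python
def match_overlap_brightness(img_a, img_b, overlap_mask):
    """Return img_b rescaled by one gain so that its mean brightness over the
    overlap matches img_a's. Images are HxW lists of values; overlap_mask
    marks pixels covered by both projections."""
    sum_a = sum_b = 0.0
    for r, row in enumerate(overlap_mask):
        for c, inside in enumerate(row):
            if inside:
                sum_a += img_a[r][c]
                sum_b += img_b[r][c]
    gain = sum_a / sum_b
    return [[v * gain for v in row] for row in img_b], gain


a = [[10.0, 10.0], [10.0, 20.0]]
b = [[5.0, 5.0], [5.0, 40.0]]
mask = [[True, True], [True, False]]  # bottom-right pixel is not in the overlap
adjusted, gain = match_overlap_brightness(a, b, mask)
```

A global adjustment over many projections would solve for one gain per image jointly, but the pairwise case shows the principle: equalize the overlaps before the seam is chosen.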
  • the method may further include a step of displaying the output image, wherein the output image may be an orthophoto.
  • because orthophotos are measurable, they can provide a large amount of geographic information, and are useful especially in natural-disaster scenarios such as earthquakes, as well as in agriculture, surveying and mapping, and transportation planning.
  • the output image generation method provided by this embodiment acquires the image captured by the photographing device mounted on the aircraft, calculates the position and posture of the photographing device at the time of capture based on a preset image processing algorithm, and, based on that position and posture, performs projection processing and image stitching processing on the image to obtain an output image. Since the position and posture of the photographing device when the image was captured are obtained through the preset image processing algorithm in the embodiment of the present invention, it is not necessary to mount a high-precision GPS and IMU on the aircraft to obtain an accurate position and posture; therefore, equipment cost can be reduced while a better-stitched output image is obtained.
  • because this embodiment can generate orthophotos in real time, the integrated solution from image acquisition to orthophoto generation greatly improves productivity compared with existing non-real-time orthophoto generation solutions, which typically require images to be imported from the aircraft to a computer and processed by software, often taking several hours.
  • the orthophoto generation scheme provided in this embodiment can deliver a higher-precision orthophoto of the survey area as soon as the aircraft's data acquisition operation ends.
  • the embodiment of the present invention provides an output image generating method.
  • the photographing device may be specifically a camera.
  • an operator sets a cruise area and a cruise route for the aircraft through the ground station; the aircraft collects images along the flight route in the cruise area and sends the collected images to the ground station as thumbnails or as a code stream.
  • after receiving the images, the ground station initializes the SLAM algorithm and generates initial semi-dense points of the shooting scene; further, through the SLAM algorithm, the position and posture of the camera at each capture are calculated, and semi-dense points of the images are generated by dense matching.
  • the ground station fits a terrain surface based on the semi-dense points and projects the received images onto the fitted terrain surface according to the position and posture at the time of shooting; a cost function is then constructed on the principle of minimal color difference on both sides of the stitching line, and the cost function is used to find the optimal stitching line for stitching the images on the terrain surface.
  • the ground station may further determine, according to the aircraft's cruise time or the number of images returned, whether the aircraft has completed image acquisition; if so, the camera positions and postures corresponding to the acquired images obtained by the SLAM algorithm, together with the semi-dense point cloud, are optimized to obtain positions and postures that meet the preset accuracy requirement and a point cloud that meets the preset quality requirement. Further, the ground station classifies the optimized point cloud into ground points and non-ground points, re-fits the terrain surface based on the divided ground points, and re-projects the images onto the re-fitted terrain surface.
  • the ground station can also perform global color adjustment on the projections on the terrain surface to ensure color consistency, and further construct a cost function so that the chosen stitching line automatically avoids passing through non-ground points (such as buildings and other objects); the resulting output image then shows no misalignment and has a good visual effect.
  • FIG. 7 is a schematic structural diagram of a ground station according to an embodiment of the present invention.
  • the ground station 10 includes a communication interface 11 and one or more processors 12, the one or more processors working independently or in cooperation, and the communication interface 11 is connected to the processor 12. The communication interface 11 is configured to: acquire an image captured by a photographing device mounted on the aircraft; the processor 12 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image is captured; the processor 12 is further configured to: perform projection processing and image stitching processing on the image, based on the position and the posture, to obtain an output image.
  • the communication interface 11 is configured to: acquire code stream data of an image captured by a photographing device mounted on an aircraft.
  • the communication interface 11 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  • the ground station further includes a display component 13, the display component 13 being communicatively coupled to the processor 12; the display component 13 is configured to: display the acquired thumbnail.
  • the communication interface 11 is further configured to: acquire GPS information of the photographing device when the image is captured; the processor 12 is further configured to: based on the GPS information corresponding to the image, The position is converted to a position in the world coordinate system, and the posture is converted into a posture in the world coordinate system.
  • the processor 12 is configured to: calculate, based on a simultaneous localization and mapping (SLAM) algorithm, the position and posture of the photographing device when the image is captured.
  • the processor 12 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
  • the processor 12 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; and the processor 12 is configured to: fit the terrain surface based on the optimized point cloud.
  • the processor 12 is configured to: extract a ground point from the optimized point cloud; and fit the terrain surface based on the ground point.
  • the processor 12 is configured to: perform optimization processing on the position and the posture, and obtain a position and a posture that meet a preset accuracy condition.
  • the processor 12 is configured to: perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
  • the output image includes an orthophoto.
  • the display component 13 is configured to display the orthophoto.
  • the processor 12 is configured to: control the shooting device of the aircraft to shoot at a constant shooting interval in the horizontal direction.
  • the processor 12 is configured to: control the shooting device of the aircraft to shoot at a varying shooting interval.
  • the processor 12 is configured to: control the shooting device of the aircraft to shoot at a varying shooting interval in the horizontal direction, wherein the shooting interval is determined based on the pre-configured image overlap rate and the relative height of the aircraft above the surface.
  • the ground station provided in this embodiment can perform the technical solution of the embodiment of FIG. 1; its execution manner and beneficial effects are similar and are not described here.
  • the embodiment of the present invention further provides a ground station.
  • the ground station is based on the embodiment of FIG. 7.
  • the processor 12 is configured to: calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on the SLAM algorithm; fit a terrain surface based on the calculated point cloud; and project the image onto the terrain surface based on the position and posture of the image.
  • the ground station provided by this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • FIG. 8 is a schematic structural diagram of a controller according to an embodiment of the present invention.
  • the controller 20 includes a communication interface 21 and one or more processors 22, the one or more processors working independently or in cooperation, and the communication interface 21 is connected to the processor 22. The communication interface 21 is configured to: acquire an image captured by a photographing device mounted on the aircraft; the processor 22 is configured to: calculate, based on a preset image processing algorithm, the position and posture of the photographing device when the image is captured; the processor 22 is further configured to: perform projection processing and image stitching processing on the image, based on the position and the posture, to obtain an output image.
  • the communication interface 21 is configured to: acquire code stream data of an image captured by a photographing device mounted on the aircraft.
  • the communication interface 21 is configured to: acquire a thumbnail of an image captured by a photographing device mounted on the aircraft.
  • the communication interface 21 is further configured to: acquire GPS information when the imaging device captures the image; and the processor 22 is further configured to: based on the GPS information corresponding to the image, The position is converted to a position in the world coordinate system, and the posture is converted into a posture in the world coordinate system.
  • the processor 22 is configured to: calculate, based on a simultaneous localization and mapping (SLAM) algorithm, the position and posture of the photographing device when the image is captured.
  • the processor 22 is configured to: construct a cost function, and perform splicing processing on the projection of the image onto the surface of the terrain based on the cost function.
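The patent does not disclose the form of the cost function. As one simple stand-in, the sketch below assigns each terrain cell to the covering image whose projected centre is nearest, i.e. the per-pixel cost is squared distance to the image centre (all names and the cost choice are assumptions for illustration):

```python
import numpy as np

def stitch_by_cost(projections, centers):
    """Per-pixel source selection by a simple cost (illustrative only).

    projections: list of (H, W, 3) images already projected onto a common
    terrain grid, with NaN where an image does not cover a cell.
    centers: the (row, col) of each image's projected centre. Each cell
    takes its colour from the covering image with the lowest cost.
    """
    h, w, _ = projections[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.full((h, w, 3), np.nan)
    best = np.full((h, w), np.inf)
    for img, (cy, cx) in zip(projections, centers):
        cost = (ys - cy) ** 2 + (xs - cx) ** 2
        covered = ~np.isnan(img[..., 0])
        take = covered & (cost < best)
        out[take] = img[take]
        best[take] = cost[take]
    return out
```

Real mosaicking cost functions typically also penalise colour differences across the seam, but the nearest-centre rule already yields a valid label map.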
  • the processor 22 is further configured to: perform optimization processing on the point cloud to obtain a point cloud that meets a preset quality condition; the processor 22 is configured to fit the terrain surface based on the optimized point cloud.
  • the processor 22 is configured to: extract a ground point from the optimized point cloud; and fit the terrain surface based on the ground point.
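A crude version of these two steps can be sketched as follows (the grid-minimum ground filter and the planar terrain model are assumptions chosen for brevity; the patent does not specify the extraction or fitting method): keep the lowest point in each horizontal grid cell as a ground candidate, then fit a least-squares plane z = a·x + b·y + c through the candidates.

```python
import numpy as np

def extract_ground_points(points, cell=5.0):
    """Keep the lowest point per horizontal grid cell as a ground candidate.

    points: (N, 3) array of x, y, z. Points from vegetation or buildings
    sit above the lowest return in their cell and are discarded.
    """
    keys = np.floor(points[:, :2] / cell).astype(int)
    ground = {}
    for k, p in zip(map(tuple, keys), points):
        if k not in ground or p[2] < ground[k][2]:
            ground[k] = p
    return np.array(list(ground.values()))

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through the ground points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)
```

A production pipeline would fit a piecewise surface (e.g. a gridded DEM) rather than a single plane, but the flow — filter to ground points, then fit — is the same.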
  • the processor 22 is configured to: perform optimization processing on the position and the posture to obtain a position and a posture that meet a preset accuracy condition.
  • the processor 22 is configured to perform color and/or brightness adjustment on a projection of the image on the terrain surface based on a preset policy.
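One simple preset policy (an assumption for illustration; the patent leaves the policy unspecified) is gain compensation: scale one projection so that its mean intensity over the overlap region matches its neighbour's.

```python
import numpy as np

def gain_compensate(img_a, img_b, overlap_mask):
    """Equalise brightness of two projected images over their overlap.

    img_a, img_b: (H, W, 3) float images in [0, 1] on the same terrain
    grid; overlap_mask: (H, W) boolean mask of cells covered by both.
    img_b is scaled so its mean in the overlap matches img_a's.
    """
    mean_a = img_a[overlap_mask].mean()
    mean_b = img_b[overlap_mask].mean()
    gain = mean_a / mean_b
    return np.clip(img_b * gain, 0.0, 1.0)
```

Applying the gain channel-by-channel instead would also correct colour cast, at the cost of possible hue shifts near saturation.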
  • the output image includes an orthophoto.
  • the processor 22 is configured to: control the photographing device of the aircraft to shoot at equal intervals in the horizontal direction.
  • the processor 22 is configured to: control the photographing device of the aircraft to shoot at varying shooting intervals.
  • the processor 22 is configured to: control the photographing device of the aircraft to shoot at equal time intervals in the horizontal direction, wherein the shooting interval is determined by a pre-configured image overlap rate and the relative height of the aircraft above the ground surface.
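The dependence of the interval on overlap rate and relative height can be made concrete with a small sketch (the function name, the field-of-view parameter, and the constant-speed assumption are illustrative; the patent gives no formula): the along-track ground footprint grows with height, the distance between shots is the footprint scaled by (1 − overlap), and dividing by ground speed yields the time interval.

```python
import math

def shooting_interval(relative_height_m, speed_ms, fov_deg=84.0, overlap=0.8):
    """Equal-time shooting interval from overlap rate and relative height.

    The along-track ground footprint of one frame is
    2 * H * tan(fov / 2); consecutive frames must overlap by `overlap`,
    so the aircraft may travel footprint * (1 - overlap) between shots.
    """
    footprint = 2.0 * relative_height_m * math.tan(math.radians(fov_deg) / 2.0)
    distance_between_shots = footprint * (1.0 - overlap)
    return distance_between_shots / speed_ms
```

For example, at 100 m relative height with a 90° field of view, 50 % overlap and 10 m/s ground speed, the interval works out to 10 s; doubling the height doubles the interval.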
  • the controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 1 , and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • the embodiment of the present invention further provides a controller; based on the embodiment of FIG. 8, the processor 22 is configured to: calculate a semi-dense or dense point cloud of the image based on the position and the posture, or calculate a sparse point cloud of the image based on the SLAM algorithm; fit the terrain surface based on the calculated point cloud; and project the image onto the terrain surface based on the position and posture of the image.
  • the controller provided in this embodiment can perform the technical solution of the embodiment of FIG. 3, and the execution manner and the beneficial effects are similar, and details are not described herein again.
  • Embodiments of the present invention further provide a computer readable storage medium comprising instructions that, when run on a computer, cause the computer to execute the output image generation method provided by the above embodiments.
  • Embodiments of the present invention provide a drone.
  • the drone includes a fuselage; a power system mounted to the fuselage for providing flight power; a photographing device mounted to the fuselage for capturing images; and a controller as described in the above embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a method, a device and an unmanned aerial vehicle for generating an output image. The method comprises: acquiring an image captured by a photographing device mounted on an aircraft; calculating, based on a preset image processing algorithm, a position and a posture of the photographing device when the image was captured; and performing, based on the position and the posture, projection processing and image stitching processing on the image to obtain an output image. The method, device and unmanned aerial vehicle described by the embodiments of the invention can reduce equipment costs while producing a better stitched output image.
PCT/CN2017/112189 2017-11-21 2017-11-21 Procédé, dispositif et véhicule aérien sans pilote pour générer une image de sortie WO2019100214A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/112189 WO2019100214A1 (fr) 2017-11-21 2017-11-21 Procédé, dispositif et véhicule aérien sans pilote pour générer une image de sortie
CN201780029525.XA CN110073403A (zh) 2017-11-21 2017-11-21 输出影像生成方法、设备及无人机

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112189 WO2019100214A1 (fr) 2017-11-21 2017-11-21 Procédé, dispositif et véhicule aérien sans pilote pour générer une image de sortie

Publications (1)

Publication Number Publication Date
WO2019100214A1 true WO2019100214A1 (fr) 2019-05-31

Family

ID=66630446

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112189 WO2019100214A1 (fr) 2017-11-21 2017-11-21 Procédé, dispositif et véhicule aérien sans pilote pour générer une image de sortie

Country Status (2)

Country Link
CN (1) CN110073403A (fr)
WO (1) WO2019100214A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112154484A (zh) * 2019-09-12 2020-12-29 深圳市大疆创新科技有限公司 正射影像生成方法、系统和存储介质
WO2021243566A1 (fr) * 2020-06-02 2021-12-09 深圳市大疆创新科技有限公司 Procédé et appareil d'imagerie, et support d'enregistrement lisible par ordinateur

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103941748A (zh) * 2014-04-29 2014-07-23 百度在线网络技术(北京)有限公司 自主导航方法及系统和地图建模方法及系统
US20140241576A1 (en) * 2013-02-28 2014-08-28 Electronics And Telecommunications Research Institute Apparatus and method for camera tracking
CN105045279A (zh) * 2015-08-03 2015-11-11 余江 一种利用无人飞行器航拍自动生成全景照片的系统及方法
CN105678754A (zh) * 2015-12-31 2016-06-15 西北工业大学 一种无人机实时地图重建方法
CN105865454A (zh) * 2016-05-31 2016-08-17 西北工业大学 一种基于实时在线地图生成的无人机导航方法
CN105874349A (zh) * 2015-07-31 2016-08-17 深圳市大疆创新科技有限公司 探测装置、探测系统、探测方法,以及可移动设备
CN106097304A (zh) * 2016-05-31 2016-11-09 西北工业大学 一种无人机实时在线地图生成方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010024212A1 (fr) * 2008-08-29 2010-03-04 三菱電機株式会社 Dispositif, procédé et programme de formation d’image en plongée
CN105627991B (zh) * 2015-12-21 2017-12-12 武汉大学 一种无人机影像实时全景拼接方法及系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241576A1 (en) * 2013-02-28 2014-08-28 Electronics And Telecommunications Research Institute Apparatus and method for camera tracking
CN103941748A (zh) * 2014-04-29 2014-07-23 百度在线网络技术(北京)有限公司 自主导航方法及系统和地图建模方法及系统
CN105874349A (zh) * 2015-07-31 2016-08-17 深圳市大疆创新科技有限公司 探测装置、探测系统、探测方法,以及可移动设备
CN105045279A (zh) * 2015-08-03 2015-11-11 余江 一种利用无人飞行器航拍自动生成全景照片的系统及方法
CN105678754A (zh) * 2015-12-31 2016-06-15 西北工业大学 一种无人机实时地图重建方法
CN105865454A (zh) * 2016-05-31 2016-08-17 西北工业大学 一种基于实时在线地图生成的无人机导航方法
CN106097304A (zh) * 2016-05-31 2016-11-09 西北工业大学 一种无人机实时在线地图生成方法

Also Published As

Publication number Publication date
CN110073403A (zh) 2019-07-30

Similar Documents

Publication Publication Date Title
WO2019100219A1 (fr) Procédé et dispositif de génération d'image de sortie, et véhicule aérien sans pilote
KR101754599B1 (ko) 드론 촬영 이미지를 기반으로 3d 오브젝트를 자동으로 추출하는 시스템 및 방법
CN106529495B (zh) 一种飞行器的障碍物检测方法和装置
WO2020014909A1 (fr) Procédé et dispositif de photographie, et véhicule aérien sans pilote
JP7556383B2 (ja) 情報処理装置、情報処理方法、情報処理プログラム、画像処理装置および画像処理システム
CN107492069B (zh) 基于多镜头传感器的图像融合方法
JP5748561B2 (ja) 航空写真撮像方法及び航空写真撮像装置
US20180262789A1 (en) System for georeferenced, geo-oriented realtime video streams
WO2018120350A1 (fr) Procédé et dispositif de positionnement de véhicule aérien sans pilote
CN112461210B (zh) 一种空地协同建筑测绘机器人系统及其测绘方法
KR20200064542A (ko) 드론을 이용한 지상기준점 측량장치 및 그 방법
CN109255808B (zh) 基于倾斜影像的建筑物纹理提取方法和装置
CN109655065A (zh) 一种无人机五航线规划方法及装置
CN115641401A (zh) 一种三维实景模型的构建方法及相关装置
WO2019230604A1 (fr) Système d'inspection
WO2022077218A1 (fr) Traitement de nuage de points en ligne de données lidar et de caméra
CN106094876A (zh) 一种无人机目标锁定系统及其方法
CN113454685A (zh) 基于云的相机标定
CN110275179A (zh) 一种基于激光雷达以及视觉融合的构建地图方法
CN111340942A (zh) 一种基于无人机的三维重建系统及其方法
WO2019100214A1 (fr) Procédé, dispositif et véhicule aérien sans pilote pour générer une image de sortie
KR20220069541A (ko) 지도제작플랫폼장치 및 이를 이용한 지도제작방법
KR100956446B1 (ko) 디지털 항공영상을 이용하여 3차원 객체의 외관 텍스쳐 자동 추출방법
JP2019207467A (ja) 3次元マップ補正装置、3次元マップ補正方法及び3次元マップ補正プログラム
WO2021115192A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image, programme, et support d'enregistrement

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933048

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933048

Country of ref document: EP

Kind code of ref document: A1