CN110926475B - Unmanned aerial vehicle waypoint generation method and device and electronic equipment - Google Patents


Info

Publication number
CN110926475B
Authority
CN
China
Prior art keywords
map
road
point
shooting
points
Prior art date
Legal status
Active
Application number
CN201911219197.7A
Other languages
Chinese (zh)
Other versions
CN110926475A
Inventor
赵东
马华东
吕点
张献忠
曹子建
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201911219197.7A
Publication of CN110926475A
Application granted
Publication of CN110926475B

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Instructional Devices (AREA)

Abstract

The embodiment of the invention provides an unmanned aerial vehicle waypoint generating method, an unmanned aerial vehicle waypoint generating device and electronic equipment, which are applied to the technical field of automatic image processing, and the method comprises the following steps: acquiring a map of a target area of a panoramic map to be established; generating a road map corresponding to the map based on the image data of the map; identifying a plurality of first shot points and a plurality of second shot points in the road map; generating a contour map representing contours of respective buildings in the map based on image data of the map; correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining specified position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points; and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the waypoints of the unmanned aerial vehicle. By the method, the unmanned aerial vehicle waypoints can be automatically generated when the unmanned aerial vehicle shoots the panoramic map.

Description

Unmanned aerial vehicle waypoint generation method and device and electronic equipment
Technical Field
The invention relates to the technical field of automatic image processing, and in particular to an unmanned aerial vehicle waypoint generation method, an unmanned aerial vehicle waypoint generation device and electronic equipment.
Background
Unmanned aerial vehicles can perform professional aerial image shooting and offer advantages such as small size, flexibility and automation. Therefore, unmanned aerial vehicles are also applied in panoramic map service technology.
In the prior art, when an unmanned aerial vehicle is used to shoot a panoramic map, an engineer needs to determine the waypoints to be shot by the unmanned aerial vehicle based on experience, and then the longitude and latitude data of the waypoints are transmitted to the unmanned aerial vehicle. Finally, the photos shot at the waypoints are stitched into a panoramic photo.
Therefore, in the existing manner of manually determining the waypoints shot by the unmanned aerial vehicle, the waypoints must be determined manually by engineers, which consumes manpower and is inefficient. How to automatically generate unmanned aerial vehicle waypoints when the unmanned aerial vehicle shoots a panoramic map is therefore an urgent problem to be solved.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for generating a waypoint of an unmanned aerial vehicle and electronic equipment, so that the automatic generation of the waypoint of the unmanned aerial vehicle is realized when the unmanned aerial vehicle shoots a panoramic map. The specific technical scheme is as follows:
the embodiment of the invention provides an unmanned aerial vehicle waypoint generating method which is applied to electronic equipment and comprises the following steps:
acquiring a map of a target area of a panoramic map to be established;
generating a road map corresponding to the map based on the image data of the map, wherein the road map is an image used for representing each road in the map;
identifying a plurality of first shot points and a plurality of second shot points in the road map; the first shooting point is an intersection of a road, an inflection point of the road, a starting point of the road or an end point of the road; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
generating a contour map representing contours of respective buildings in the map based on image data of the map;
correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining specified position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle.
Optionally, the manner of identifying the plurality of second shot points in the road map includes:
and equally dividing each road in the road map according to a preset length to obtain a plurality of second shooting points, wherein each second shooting point is an equal division point.
Optionally, determining, from the corrected outline map, designated position points of each side of the outline of each building to obtain a plurality of third shot points, including:
from the corrected outline map, the midpoints of the sides of the outline of each building are specified, and a plurality of third shot points are obtained.
Optionally, before obtaining longitude and latitude coordinates of each of the first shooting point, the second shooting point, and the third shooting point, the method further includes:
removing one first shooting point of two first shooting points with the distance smaller than a first threshold value;
removing second shooting points with the distance from any first shooting point smaller than a second threshold value from all the second shooting points;
and removing, from all third shooting points belonging to the short sides of building outlines, third shooting points whose distance from any first shooting point or any second shooting point is smaller than a third threshold value;
and removing one of two third shooting points which belong to different buildings and whose distance is smaller than a fourth threshold value.
Optionally, the obtaining a map of a target area of a panoramic map to be established includes:
acquiring a map of an area with a preset color from a preset global map, wherein the map is used as a map of a target area of a panoramic map to be established; the preset colors comprise a first color and a second color, the first color is a color for marking each road, and the second color is a color for marking each building.
Optionally, the generating a road map corresponding to the map based on the image data of the map includes:
determining, in map data of the map, respective first areas having the first color;
carrying out binarization processing on the map based on each determined first area to obtain a road map corresponding to the map;
wherein, based on the determined first areas, the binarization processing of the map comprises setting the first areas in the map to be white and setting areas outside the first areas to be black.
Optionally, the generating a contour map for representing the contour of each building in the map based on the image data of the map includes:
determining, in map data of the map, respective second areas having the second color;
carrying out binarization processing on the map based on the determined second areas, and generating a contour map of the contour of each building in the map based on the gradient values of adjacent pixel points in the map after binarization processing;
wherein the binarizing processing of the map based on the determined second areas includes setting the second areas in the map to be white and setting areas other than the second areas to be black.
The embodiment of the invention also provides an unmanned aerial vehicle waypoint generating device, which is applied to electronic equipment, and the device comprises:
the map acquisition module is used for acquiring a map of a target area of the panoramic map to be established;
the road map generating module is used for generating a road map corresponding to the map based on the image data of the map, wherein the road map is an image used for representing each road in the map;
the shot point identification module is used for identifying a plurality of first shot points and a plurality of second shot points in the road map; the first shooting point is an intersection of a road, an inflection point of the road, a starting point of the road or an end point of the road; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
a contour map generation module for generating contour maps representing the contours of the buildings in the map based on the image data of the map;
the position point determining module is used for correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining the appointed position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and the coordinate obtaining module is used for obtaining longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle.
The embodiment of the invention also provides electronic equipment which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the steps of any unmanned aerial vehicle waypoint generating method provided by the embodiment of the invention when executing the program stored in the memory.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of any unmanned aerial vehicle waypoint generating method provided by the embodiment of the invention are realized.
The embodiment of the invention has the following beneficial effects:
according to the unmanned aerial vehicle waypoint generation method, the unmanned aerial vehicle waypoint generation device and the electronic equipment, the map of the target area of the panoramic map to be established can be obtained, the road map corresponding to the map is generated based on the image data of the map, and the plurality of first shooting points and the plurality of second shooting points in the road map are obtained; generating an outline map for representing the outline of each building in the map based on the image data of the map, and obtaining a plurality of third shooting points in the outline map; and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle. According to the scheme, the electronic equipment automatically obtains each shooting point of the unmanned aerial vehicle from the map of the target area according to the preset identification rule without manually determining the navigation point by the user, and the longitude and latitude coordinates of the shooting point of the unmanned aerial vehicle obtained from the map are used as the navigation point of the unmanned aerial vehicle. It is thus clear that this scheme can reach the effect of automatic generation unmanned aerial vehicle waypoint when unmanned aerial vehicle shoots the panorama map.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for generating waypoints of an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 2 is another flowchart of a method for generating waypoints for an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 3 is another flowchart of a method for generating waypoints for an unmanned aerial vehicle according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a map including waypoints of a drone provided in an embodiment of the invention;
fig. 5 is a schematic view showing a positional relationship between a first shot point and a second shot point located on a road;
FIG. 6 is a schematic diagram of a gray scale map of a region having a predetermined color provided in an embodiment of the present invention;
fig. 7 is a schematic diagram of a road map obtained after a map binarization process is performed according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a road map with a first shot point according to an embodiment of the present invention;
fig. 9(a) is a schematic diagram of a map obtained after map binarization processing in a process of generating a contour map;
FIG. 9(b) is a schematic view of an outline map corresponding to the map shown in FIG. 9(a);
FIG. 10 is a schematic view of the corrected outline map obtained from the outline map shown in FIG. 9(b);
FIG. 11 is a schematic diagram of an outline diagram with a third shot point according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an unmanned aerial vehicle waypoint generating apparatus provided in an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to realize automatic generation of unmanned aerial vehicle waypoints when an unmanned aerial vehicle shoots a panoramic map, the embodiment of the invention provides an unmanned aerial vehicle waypoint automatic generation method, an unmanned aerial vehicle waypoint automatic generation device and electronic equipment.
First, a method for generating a waypoint of an unmanned aerial vehicle provided by an embodiment of the invention is described below.
It should be noted that the waypoint of the unmanned aerial vehicle mentioned in the embodiment of the present invention is a shooting point determined when the unmanned aerial vehicle shoots the panoramic map. The unmanned aerial vehicle shoots the photos at the waypoints, and then the photos shot at each waypoint are spliced together, so that the panoramic map can be obtained. The implementation manner of splicing the pictures taken at each navigation point does not belong to the invention point of the present invention, and is not limited herein.
In addition, the unmanned aerial vehicle waypoint generating method provided by the embodiment of the invention is applied to an electronic device. The electronic device may be a device that controls the unmanned aerial vehicle. In a specific application, the electronic device may be a computer, a tablet computer, or the like.
As shown in fig. 1, the method for generating a waypoint of an unmanned aerial vehicle provided in the embodiment of the present invention may include the following steps:
s101, obtaining a map of a target area of a panoramic map to be established;
s102, generating a road map corresponding to the map based on the image data of the map, wherein the road map is an image used for representing each road in the map;
s103, identifying a plurality of first shooting points and a plurality of second shooting points in the road map; the first shooting point is an intersection of a road, an inflection point of the road, a starting point of the road or an end point of the road; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
s104, generating a contour map for representing the contour of each building in the map based on the image data of the map;
s105, correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining the appointed position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and S106, acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle.
Wherein, the execution order of S102-S103, and S104-S105 is not strict, for example: S102-S103 may be performed first, followed by S104-S105; alternatively, S104-S105 may be performed first, followed by S102-S103; alternatively, S102-S103, and S104-S105 may be performed simultaneously.
In S101, acquiring the map of the target area of the panoramic map to be established specifically means acquiring a planar map of the target area of the panoramic map to be established. For example, a planar map of the target area of the panoramic map to be created may be obtained from an existing map client such as the Baidu map or the Gaode map by using a specified API (Application Programming Interface). In order to obtain the planar map of the target area of the panoramic map to be established, the planar map of the target area can be obtained by setting a predetermined color for the map area of the target area in the map client. For clarity of the description of the solution, how to obtain the map of the target area of the panoramic map to be created by setting the predetermined colors for the map is described later with reference to specific embodiments.
The target area can be an exhibition hall, a scenic spot, a campus, a residential area and the like.
In S102, the image data of the map includes road data of the target area, and a road map corresponding to the map is generated based on the image data. There are various methods for generating the road map corresponding to the map based on the image data of the map. For example, in one implementation, a user may calibrate the roads on the map of the target area, so that the road map corresponding to the map is generated based on the calibration result of the roads.
In another implementation, when the plane map may have a predetermined color, a road map corresponding to the map may be generated based on the predetermined color in the map. For clarity, how to generate the road map corresponding to the map based on the predetermined color will be described in detail later with reference to specific embodiments.
In S103, there are various implementations of identifying the plurality of first shot points and the plurality of second shot points in the road map.
For the first shot points, in one implementation, corner point detection may be performed on the obtained road map, for example by using the Harris corner point detection method, to obtain a plurality of first shot points. Specifically, the Harris corner detection method moves a characteristic window over the obtained road map, and when a large gray-level change is produced in the window area, it is considered that a corner is encountered inside the window; after the corner point detection, the plurality of first shooting points are obtained through corner center-of-gravity correction. In another implementation, the intersection points, end points and turning points in the road map, that is, road intersections, road starting points, road end points and road inflection points, are extracted from the road map by any conventional method, and the extracted points are used as the first shot points.
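By way of illustration only, the following Python sketch (using OpenCV) shows one possible way to realize the corner detection described above; the function names, parameter values, and the sub-pixel refinement used as a stand-in for the corner center-of-gravity correction are assumptions of this sketch, not a limitation of the embodiment.

```python
import cv2

def detect_first_shot_points(road_map_gray):
    """Sketch: Harris-based corner detection on a binarized road map.

    road_map_gray: single-channel image where roads are white (255)
    and the background is black (0). Returns (x, y) corner positions,
    which correspond to road intersections, inflection points, and
    road start/end points.
    """
    # Harris-style corner detection with built-in suppression of
    # nearby duplicate responses (parameter values are assumptions).
    corners = cv2.goodFeaturesToTrack(
        road_map_gray, maxCorners=500, qualityLevel=0.01,
        minDistance=10, useHarrisDetector=True, k=0.04)
    if corners is None:
        return []

    # Refine each corner toward a sub-pixel position, standing in for
    # the corner center-of-gravity correction mentioned in the text.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(road_map_gray, corners, (5, 5), (-1, -1), criteria)
    return [tuple(p.ravel()) for p in refined]
```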
For the second shot points, in one implementation, the roads in the road map may be equally divided according to a predetermined length to obtain a plurality of second shooting points, each second shooting point being an equal division point. The predetermined length may be the map distance corresponding to the maximum actual distance that the unmanned aerial vehicle can shoot; for example, the maximum actual distance that the unmanned aerial vehicle can shoot may be 10 m, 20 m, etc., and if the map scale is 1:10000, the predetermined length may be 1 mm, 2 mm, etc. In another implementation, the number of position points corresponding to the length of each road in the road map may be determined according to a predetermined correspondence between road length ranges and numbers of position points; then each road in the road map is equally divided according to its number of position points, and the equal division points are used as the second shot points.
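As a non-limiting sketch of the equal-division approach, the following assumes each road has already been reduced to an ordered centerline polyline in pixel coordinates; the helper names and the exclusion of the road ends (which are already covered by first shooting points) are assumptions for illustration.

```python
import numpy as np

def equally_divide_road(polyline, step_px):
    """Sketch: place second shooting points at equal intervals along a road.

    polyline: ordered list of (x, y) points tracing one road's centerline.
    step_px:  the predetermined length converted to pixels (e.g. the map
              distance matching the UAV's maximum shooting distance).
    Returns the interior division points only.
    """
    pts = np.asarray(polyline, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    total = cum[-1]

    points = []
    for t in np.arange(step_px, total, step_px):         # interior marks only
        i = np.searchsorted(cum, t, side="right") - 1    # segment containing t
        ratio = (t - cum[i]) / max(seg[i], 1e-9)
        points.append(tuple(pts[i] + ratio * (pts[i + 1] - pts[i])))
    return points
```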
In S104, the image data of the map includes building data of the target area, and an outline map representing the outlines of the respective buildings in the map is generated based on the image data. There are various methods for generating the contour map of the contours of the buildings in the map based on the image data of the map. For example, in one implementation, a user may calibrate the outlines of the buildings on the map, so that the contour map is generated based on the calibration result.
In another implementation, when the plan map may have a predetermined color, an outline map for representing outlines of respective buildings in the map may be generated based on the predetermined color in the map. For clarity, how to generate the outline map for representing the outline of each building in the map based on the predetermined color will be described in detail later with reference to specific embodiments.
In S105, correcting the outline of each building in the outline map according to the minimum rectangular coverage mode may specifically include: performing minimum-area rectangle coverage on the outline of each building in the obtained outline map, that is, for each building, covering the outline of the building with a corresponding minimum rectangle, where the length of the minimum rectangle is the maximum length of the building outline and the width of the minimum rectangle is the maximum width of the building outline. This achieves complete coverage of the building outline with the smallest rectangle.
In one implementation, determining the designated position points of each side of the outline of each building from the corrected outline map to obtain a plurality of third shot points may include: from the corrected outline map, the midpoints of the sides of the outline of each building are specified, and a plurality of third shot points are obtained. In another implementation, determining the designated position points of each side of the contour of each building from the corrected contour map to obtain a plurality of third shooting points may include: from the corrected outline map, the midpoints and the end points of the sides of the outline of each building are specified, and a plurality of third shot points are obtained.
In S106, a manner of obtaining longitude and latitude coordinates of each of the first shooting point, the second shooting point, and the third shooting point may include, for example: and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point through clients such as the existing Baidu map and the existing Gaode map.
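The longitude and latitude coordinates themselves are obtained through the map client as described above; purely as an illustration of the underlying conversion, the following sketch assumes the target-area map is a north-up rectangular image whose north-west and south-east corner coordinates are known, and linearly interpolates pixel positions (a simplification that ignores map projection distortion). All names and parameters here are hypothetical.

```python
def pixel_to_latlon(px, py, img_w, img_h, nw_corner, se_corner):
    """Sketch: convert an image pixel (px, py) to (lat, lon).

    nw_corner / se_corner: (lat, lon) of the image's north-west and
    south-east corners, assumed known from the map client.
    """
    nw_lat, nw_lon = nw_corner
    se_lat, se_lon = se_corner
    lon = nw_lon + (px / img_w) * (se_lon - nw_lon)
    lat = nw_lat + (py / img_h) * (se_lat - nw_lat)  # y grows downward, latitude decreases
    return lat, lon
```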
For ease of understanding, fig. 4 illustratively presents a schematic view of a map containing the waypoints of the drone. The unmanned aerial vehicle waypoint in fig. 4 is determined by the method provided by the embodiment of the invention, and each dot in fig. 4 is the unmanned aerial vehicle waypoint.
According to the unmanned aerial vehicle waypoint generation method, the unmanned aerial vehicle waypoint generation device and the electronic equipment, the map of the target area of the panoramic map to be established can be obtained, the road map corresponding to the map is generated based on the image data of the map, and the plurality of first shooting points and the plurality of second shooting points in the road map are obtained; generating an outline map for representing the outline of each building in the map based on the image data of the map, and obtaining a plurality of third shooting points in the outline map; and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle. According to the scheme, the electronic equipment automatically obtains each shooting point of the unmanned aerial vehicle from the map of the target area according to the preset identification rule without manually determining the navigation point by the user, and the longitude and latitude coordinates of the shooting point of the unmanned aerial vehicle obtained from the map are used as the navigation point of the unmanned aerial vehicle. It is thus clear that this scheme can reach the effect of automatic generation unmanned aerial vehicle waypoint when unmanned aerial vehicle shoots the panorama map.
In order to reduce the number of waypoints and reduce the workload of the unmanned aerial vehicle, before step S106, the numbers of the acquired first shooting points, second shooting points and third shooting points may be optimized, as shown in fig. 2, with the following specific steps:
s201, removing one first shooting point from two first shooting points with the distance smaller than a first threshold value;
s202, eliminating second shooting points with the distance from any first shooting point to any first shooting point smaller than a second threshold value from all the second shooting points;
and S203, eliminating third shooting points of which the distance from any first shooting point or any second shooting point is smaller than a third threshold value from the third shooting points belonging to the short sides of the outlines of the buildings.
And S204, one third shooting point of two third shooting points which belong to different buildings and have a distance smaller than a fourth threshold value is removed.
The steps described in S201 to S204 above are not executed in a strict order.
In S201, since road lengths are uncertain and the first shooting points of some roads are close to each other, in order to optimize the road shooting points, one of two first shooting points whose distance is smaller than the first threshold value is eliminated. The two first shooting points may be located on the same road, or the two first shooting points may be located on different roads that communicate with each other.
In S202, since road lengths are uncertain, the road lengths may not all be multiples of the predetermined length. As a result, some second shooting points lie close to a first shooting point, causing repetition. In order to optimize the road shooting points, the second shooting points whose distance from any first shooting point is smaller than the second threshold value are removed. For example, as shown in fig. 5, a1 and a2 are first shooting points and B1, B2, B3 and B4 are second shooting points; the second shooting point B4 in the circle is close to the first shooting point a2, so the second shooting point B4 is eliminated.
In S203, most buildings are surrounded by roads, and the main appearance of a building is presented by its front, that is, by the long side of the building outline. In view of these two considerations, when a third shot point belonging to the short side of a building outline is at a distance smaller than the third threshold value from a first shot point or a second shot point, that third shot point is eliminated.
In S204, since the buildings may be built in groups and close to each other, and the panoramic map mentioned in the embodiment of the present invention is characterized by being rotatable by 360 °, two buildings whose adjacent distances are smaller than the fourth threshold value may be photographed using one photographing point.
The first threshold, the second threshold, the third threshold, and the fourth threshold may be predetermined length thresholds, where the predetermined length thresholds may be map distances corresponding to a maximum actual distance that the unmanned aerial vehicle can shoot, for example, the maximum actual distance that the unmanned aerial vehicle can shoot may be 10m, 20m, and the like, and if the map scale is 1: 10000, then the predetermined length threshold may be 1mm, 2mm, etc.
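A minimal sketch of the S201-S204 pruning is given below, assuming all shooting points are expressed in one pixel coordinate system and that the four thresholds are given in the same units; the greedy pairwise scan is one possible realization and not necessarily the exact procedure of the embodiment, and the data structure for third shooting points is an assumption of the sketch.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def prune_waypoints(first_pts, second_pts, third_pts, t1, t2, t3, t4):
    """Sketch of the S201-S204 optimization of shooting points.

    third_pts: list of dicts such as
        {"pt": (x, y), "building": building_id, "on_short_side": bool}
    """
    # S201: of two first shooting points closer than t1, keep only one.
    kept_first = []
    for p in first_pts:
        if all(dist(p, q) >= t1 for q in kept_first):
            kept_first.append(p)

    # S202: drop second shooting points within t2 of any first shooting point.
    kept_second = [p for p in second_pts
                   if all(dist(p, q) >= t2 for q in kept_first)]

    # S203: drop short-side third points within t3 of any first/second point.
    stage3 = [d for d in third_pts
              if not (d["on_short_side"] and
                      any(dist(d["pt"], q) < t3 for q in kept_first + kept_second))]

    # S204: of two third points from different buildings closer than t4, keep one.
    kept_third = []
    for d in stage3:
        clash = any(dist(d["pt"], k["pt"]) < t4 and k["building"] != d["building"]
                    for k in kept_third)
        if not clash:
            kept_third.append(d)

    return kept_first, kept_second, [d["pt"] for d in kept_third]
```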
The method provided by the embodiment of the invention can optimize the quantity of the acquired first shooting points, the second shooting points and the third shooting points, reduce the shooting task of the unmanned aerial vehicle and further improve the working efficiency of the unmanned aerial vehicle when shooting the panoramic map.
The following describes in detail a case where a map of a target area to be created into a panoramic map has a predetermined color. As shown in fig. 3, the method for generating a waypoint of an unmanned aerial vehicle provided in the embodiment of the present invention may include the following steps:
s301, acquiring a map of an area with a preset color from a preset global map, wherein the map is used as a map of a target area of a panoramic map to be established; the preset colors comprise a first color and a second color, the first color is a color for marking each road, and the second color is a color for marking each building.
S3021, determining respective first areas having the first color in the map data of the map;
s3022, carrying out binarization processing on the map based on each determined first area to obtain a road map corresponding to the map;
wherein, based on the determined first areas, the binarization processing of the map comprises setting the first areas in the map to be white and setting areas outside the first areas to be black.
S303, identifying a plurality of first shooting points and a plurality of second shooting points in the road map, wherein the first shooting points are intersections of roads, inflection points of the roads, starting points of the roads or end points of the roads; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
s3041, in the map data of the map, determining respective second areas having the second color;
s3042, carrying out binarization processing on the map based on the determined second areas, and generating a contour map of the contour of each building in the map based on the gradient values of adjacent pixel points in the map after binarization processing;
wherein the binarization processing is to set each second region in the map to white and to set regions other than each second region to black.
S305, correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining the designated position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and S306, acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle.
In step S301, the global map is a large-scale map provided by the map client, and in this scheme, the map of the target area is an image of a partial area in the large-scale map, where the partial area has the predetermined colors. For example, the colors of the elements in the map can be customized by using a personalized map API, and the electronic device can then obtain a map of an area with the predetermined colors from the predetermined global map as the map of the target area of the panoramic map to be established. Fig. 6 exemplarily shows a gray-scale image of a map of an area having the predetermined colors, where the black area blocks represent building areas having the second color, and the black lines represent road areas having the first color. The first color is a color for labeling each road; for example, the first color may be purple with RGB (red, green, blue) values of 58, 32, 204, or green with RGB values of 0, 255, 0, etc. The second color is a color for labeling each building; for example, the second color may be yellow with RGB values of 255, 255, 0, or red with RGB values of 255, 0, 0, etc.
In step S3021, for example, color matching detection may be performed on the map data, RGB values in the map may be detected, and respective regions having RGB values of the first color may be determined as respective first regions.
In step S3022, for example, the map may be grayed, and the grayed map may be binarized based on each of the determined first areas. As shown in fig. 7, the binarization processing is such that the road portion in the map is set to white and the portion other than the road is set to black.
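As an illustrative sketch only, the first-color regions can equivalently be extracted with a direct color mask, which yields the same result as graying followed by binarization (first areas white, everything else black); the tolerance parameter and the function arguments are assumptions of this sketch.

```python
import cv2
import numpy as np

def make_road_map(map_bgr, first_color_rgb, tol=10):
    """Sketch: binarize the map so first-color (road) regions become white.

    map_bgr:         the target-area map as loaded by cv2.imread (BGR order).
    first_color_rgb: the RGB value used to label roads, e.g. (58, 32, 204).
    tol:             per-channel tolerance for anti-aliased edges (assumption).
    """
    r, g, b = first_color_rgb
    lower = np.array([max(b - tol, 0), max(g - tol, 0), max(r - tol, 0)], dtype=np.uint8)
    upper = np.array([min(b + tol, 255), min(g + tol, 255), min(r + tol, 255)], dtype=np.uint8)
    # 255 inside the first-color (road) areas, 0 everywhere else.
    road_map = cv2.inRange(map_bgr, lower, upper)
    return road_map
```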
In S303, a plurality of shot points in the road map after the binarization processing are identified, and for example, Harris corner point detection may be performed on the binarized road map obtained in S3022. Specifically, the Harris corner detection method is to move a characteristic window on the obtained road map, and when a large gray level change is generated in a window area, the corner is considered to be encountered inside the window. And after angular point detection, correcting the center of gravity of the angular point to obtain a first shooting point. As shown in fig. 8, the dots in the figure are the first shot points obtained.
In S3041, for example, color matching detection may be performed on the map data, RGB values in the map are detected, and respective regions whose RGB values are RGB values of the second color are determined as respective second regions.
In step S3042, for example, the map may be grayed, and the grayed map is binarized based on the determined second areas to obtain a schematic diagram of the building after binarization, and then a building contour map is generated by using gradient changes of adjacent pixels in the image. Fig. 9(a) is a schematic diagram exemplarily showing a map obtained after the binarization process of the map based on the determined respective second regions, and fig. 9(b) is a schematic diagram of an outline corresponding to the map shown in fig. 9 (a).
Generating the contour map of the contours of the buildings in the map based on the gradient values of adjacent pixel points in the binarized map may specifically include: judging whether the gradient value between adjacent pixel points in the binarized map reaches a predetermined gradient threshold value, and if so, taking one of the adjacent pixel points as a pixel point on the building outline. The predetermined gradient threshold may be a value determined based on the gradient change between black and white pixels.
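A possible realization of the gradient-based contour extraction is sketched below; since the input is a 0/255 binary image, thresholding the gradient magnitude simply marks the black-to-white boundary pixels. The threshold value and the final masking step are assumptions made for this sketch.

```python
import cv2
import numpy as np

def building_contour_map(building_binary):
    """Sketch: derive a building contour map from the binarized building image.

    building_binary: image where building regions are white (255) and the
    rest is black (0). A pixel is treated as a contour pixel when the
    gradient between it and its neighbours exceeds a threshold.
    """
    gx = cv2.Sobel(building_binary, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(building_binary, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)

    # Threshold chosen from the black-to-white jump of a binary image (assumption).
    contour_map = np.zeros_like(building_binary)
    contour_map[grad > 255.0] = 255
    # Keep only boundary pixels that belong to the building region itself.
    contour_map = cv2.bitwise_and(contour_map, building_binary)
    return contour_map
```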
In S305, the outline of each building in the outline map is modified according to a minimum rectangle coverage, for example, the length of the minimum rectangle is the maximum length of the outline of the building, and the width of the minimum rectangle is the maximum width of the outline of the building. For example, fig. 10 is a schematic view of a profile corrected from the profile shown in fig. 9 (b).
Wherein, from the corrected outline drawing, determining the designated position point of each side of the outline of each building may include: from the corrected outline map, the midpoints of the sides of the outline of each building are specified, and a plurality of third shot points are obtained. As shown in fig. 11, a dashed line frame in the figure is a modified outline of the obtained building outline, and a circle located at a midpoint of each side of the dashed line frame in fig. 11 is a third shot point.
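The minimum-rectangle correction and the side-midpoint extraction could, for example, be realized with OpenCV's minimum-area bounding rectangle as sketched below; treating cv2.minAreaRect as the "minimum rectangle covering" described in the embodiment is an assumption of this sketch (OpenCV 4.x return signatures are assumed).

```python
import cv2

def third_shot_points(building_binary):
    """Sketch: minimum-rectangle correction of building outlines and
    extraction of the midpoint of each rectangle side as third shooting points.

    building_binary: binarized building image (buildings white, rest black).
    Returns one entry per building: (4 rectangle corners, 4 side midpoints).
    """
    contours, _ = cv2.findContours(building_binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for cnt in contours:
        rect = cv2.minAreaRect(cnt)        # minimum-area bounding rectangle
        box = cv2.boxPoints(rect)          # its 4 corner points (float32)
        midpoints = [(box[i] + box[(i + 1) % 4]) / 2.0 for i in range(4)]
        results.append((box, midpoints))
    return results
```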
Step S306 is the same as step S106, and is not described herein.
According to the unmanned aerial vehicle waypoint generating method provided by the embodiment of the invention, a map of an area with a preset color is obtained from a preset global map and is used as a map of a target area of a panoramic map to be established, each first area with the first color is determined based on image data of the map, a road map corresponding to the map is generated, and a plurality of first shooting points and a plurality of second shooting points in the road map are obtained; determining each second area with the second color based on the image data of the map, generating an outline map for representing the outline of each building in the map, and obtaining a plurality of third shooting points in the outline map; and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle. According to the scheme, each shooting point of the unmanned aerial vehicle can be automatically obtained from the map of the target area according to rules, the longitude and latitude coordinates of the shooting point of the unmanned aerial vehicle obtained in the map are sent to the unmanned aerial vehicle as the unmanned aerial vehicle navigation point, the navigation point does not need to be manually determined, and therefore the effect of automatically generating the unmanned aerial vehicle navigation point when the panoramic map is shot by the unmanned aerial vehicle is achieved.
Corresponding to the above method embodiment, an embodiment of the present invention further provides an unmanned aerial vehicle waypoint generation apparatus, which is applied to an electronic device. As shown in fig. 12, the unmanned aerial vehicle waypoint generation apparatus includes:
the map acquisition module 1210 is used for acquiring a map of a target area of a panoramic map to be established;
a road map generating module 1220, configured to generate a road map corresponding to the map based on the image data of the map, where the road map is an image representing each road in the map;
a shot point identification module 1230, configured to identify a plurality of first shot points and a plurality of second shot points in the road map; the first shooting point is an intersection of a road, an inflection point of the road, a starting point of the road or an end point of the road; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
an outline map generation module 1240 for generating an outline map representing an outline of each building in the map based on the image data of the map;
a position point determining module 1250, configured to modify the contour of each building in the contour map according to a minimum rectangular coverage manner, and determine, from the modified contour map, designated position points of each side of the contour of each building to obtain a plurality of third shooting points;
and the coordinate obtaining module 1260 is used for obtaining longitude and latitude coordinates of each of the first shooting point, the second shooting point and the third shooting point to serve as the waypoint of the unmanned aerial vehicle.
The unmanned aerial vehicle waypoint generating device provided by the embodiment of the invention can generate a road map corresponding to the map based on the image data of the map by acquiring the map of a target region of a panoramic map to be established, so as to obtain a plurality of first shooting points and a plurality of second shooting points in the road map; generating an outline map for representing the outline of each building in the map based on the image data of the map, and obtaining a plurality of third shooting points in the outline map; and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle. According to the scheme, each shooting point of the unmanned aerial vehicle can be automatically obtained from the map of the target area according to rules, the longitude and latitude coordinates of the shooting point of the unmanned aerial vehicle obtained in the map are sent to the unmanned aerial vehicle as the unmanned aerial vehicle navigation point, the navigation point does not need to be manually determined, and therefore the effect of automatically generating the unmanned aerial vehicle navigation point when the panoramic map is shot by the unmanned aerial vehicle is achieved.
Optionally, in an implementation manner, the shot point identifying module 1230 may include:
and the equally dividing module is used for equally dividing each road in the road map according to a preset length to obtain a plurality of second shooting points, and each second shooting point is an equally dividing point.
Optionally, in an implementation manner, the location point determining module 1250 may include:
and the midpoint determining submodule is used for determining the midpoint of each side of the outline of each building from the corrected outline map to obtain a plurality of third shooting points.
Optionally, in an implementation manner, before the coordinate obtaining module obtains longitude and latitude coordinates of each of the first shooting point, the second shooting point, and the third shooting point, the apparatus further includes:
the first removing sub-module is used for removing one first shooting point from two first shooting points with the distance smaller than a first threshold value;
the second eliminating submodule is used for eliminating, from all the second shooting points, the second shooting points whose distance from any first shooting point is smaller than a second threshold value;
and the third eliminating submodule is used for eliminating third shooting points of which the distance from any first shooting point or any second shooting point is smaller than a third threshold value from all third shooting points belonging to the short side of the outline of the building.
And the fourth elimination submodule is used for eliminating one of the two third shooting points which belong to different buildings and have the distance smaller than a fourth threshold value.
Optionally, in an implementation manner, the map obtaining module 1210 may include:
the map acquisition submodule is used for acquiring a map of an area with a preset color from a preset global map, and the map is used as a map of a target area of the panoramic map to be established; the preset colors comprise a first color and a second color, the first color is a color for marking each road, and the second color is a color for marking each building.
Optionally, on the basis that the apparatus includes a map obtaining sub-module, the road map generating module 1220 may include:
a first determination sub-module for determining respective first areas having the first color in map data of the map;
the first processing submodule is used for carrying out binarization processing on the map based on each determined first area to obtain a road map corresponding to the map;
wherein, based on the determined first areas, the binarization processing of the map comprises setting the first areas in the map to be white and setting areas outside the first areas to be black.
Optionally, on the basis that the apparatus includes a map obtaining sub-module, the contour map generating module 1240 may include:
a second determination sub-module for determining respective second areas having the second color in map data of the map;
the second processing submodule is used for carrying out binarization processing on the map based on the determined second areas and generating a contour map of the contour of each building in the map based on the gradient values of adjacent pixel points in the map after the binarization processing;
wherein the binarizing processing of the map based on the determined second areas includes setting the second areas in the map to be white and setting areas other than the second areas to be black.
An embodiment of the present invention further provides an electronic device, as shown in fig. 13, including a processor 1301, a communication interface 1302, a memory 1303, and a communication bus 1304, where the processor 1301, the communication interface 1302, and the memory 1303 complete mutual communication through the communication bus 1304,
a memory 1303 for storing a computer program;
the processor 1301 is configured to implement the method for generating the waypoint of the unmanned aerial vehicle provided by the embodiment of the present invention when the program stored in the memory 1303 is executed.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned unmanned aerial vehicle waypoint generating methods.
In yet another embodiment, a computer program product containing instructions is also provided, which when run on a computer, causes the computer to perform any of the above-described drone waypoint generation methods.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An unmanned aerial vehicle waypoint generation method is applied to electronic equipment, and the method comprises the following steps:
acquiring a map of a target area of a panoramic map to be established;
generating a road map corresponding to the map based on the image data of the map, wherein the road map is an image used for representing each road in the map;
identifying a plurality of first shot points and a plurality of second shot points in the road map; the first shooting point is an intersection of a road, an inflection point of the road, a starting point of the road or an end point of the road; the second shooting point is a position point on the road in the road map except for a road starting point and a road ending point;
generating a contour map representing contours of respective buildings in the map based on image data of the map;
correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining specified position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and acquiring longitude and latitude coordinates of each first shooting point, each second shooting point and each third shooting point to serve as the navigation point of the unmanned aerial vehicle.
2. The method of claim 1, wherein identifying the plurality of second shot points in the road map comprises:
and equally dividing each road in the road map according to a preset length to obtain a plurality of second shooting points, wherein each second shooting point is an equal division point.
3. The method according to claim 1, wherein determining the designated position points of each side of the outline of each building from the corrected outline map to obtain a plurality of third shot points comprises:
from the corrected outline map, the midpoints of the sides of the outline of each building are specified, and a plurality of third shot points are obtained.
4. The method of any one of claims 1-3, wherein before obtaining the longitude and latitude coordinates of each of the first shot point, the second shot point, and the third shot point, the method further comprises:
removing one first shooting point of two first shooting points with the distance smaller than a first threshold value;
removing second shooting points with the distance from any first shooting point smaller than a second threshold value from all the second shooting points;
removing third shooting points with the distance from any first shooting point or any second shooting point being smaller than a third threshold value from all third shooting points belonging to the short side of the outline of the building;
and removing one third shooting point of two third shooting points which belong to different buildings and whose distance is smaller than a fourth threshold value.
5. The method according to any one of claims 1 to 3, wherein the obtaining of the map of the target area of the panoramic map to be built comprises:
acquiring a map of an area with a preset color from a preset global map, wherein the map is used as a map of a target area of a panoramic map to be established; the preset colors comprise a first color and a second color, the first color is a color for marking each road, and the second color is a color for marking each building.
6. The method of claim 5, wherein generating the road map corresponding to the map based on the image data of the map comprises:
determining, in map data of the map, respective first areas having the first color;
carrying out binarization processing on the map based on each determined first area to obtain a road map corresponding to the map;
wherein, based on the determined first areas, the binarization processing of the map comprises setting the first areas in the map to be white and setting areas outside the first areas to be black.
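A minimal OpenCV sketch of the binarization in claim 6 follows, assuming roads are marked with a single known BGR colour and allowing a small tolerance around it; build_road_map, the tolerance parameter, and the placeholder file name are assumptions.

    import cv2
    import numpy as np

    def build_road_map(map_bgr, road_color_bgr, tol=10):
        """Binarise the map: pixels close to the road colour become white
        (255) and everything else black (0)."""
        color = np.array(road_color_bgr, dtype=np.int16)
        lower = np.clip(color - tol, 0, 255).astype(np.uint8)
        upper = np.clip(color + tol, 0, 255).astype(np.uint8)
        road_mask = cv2.inRange(map_bgr, lower, upper)   # 255 inside the range
        return road_mask

    # Example usage (the file name is only a placeholder):
    # map_img = cv2.imread("target_area_map.png")
    # road_map = build_road_map(map_img, road_color_bgr=(255, 255, 255))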
7. The method of claim 5, wherein generating a contour map representing contours of individual buildings in the map based on the image data of the map comprises:
determining, in map data of the map, respective second areas having the second color;
carrying out binarization processing on the map based on the determined second areas, and generating a contour map of the contour of each building in the map based on the gradient values of adjacent pixel points in the map after binarization processing;
wherein the binarizing processing of the map based on the determined second areas includes setting the second areas in the map to be white and setting areas other than the second areas to be black.
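A minimal OpenCV sketch of the contour-map generation in claim 7: the map is binarised by the building colour, and outline pixels are taken where the binarised image has a non-zero gradient between adjacent pixels; build_contour_map and the Sobel-based gradient are assumptions for illustration.

    import cv2
    import numpy as np

    def build_contour_map(map_bgr, building_color_bgr, tol=10):
        """Binarise buildings to white, then keep only boundary pixels."""
        color = np.array(building_color_bgr, dtype=np.int16)
        lower = np.clip(color - tol, 0, 255).astype(np.uint8)
        upper = np.clip(color + tol, 0, 255).astype(np.uint8)
        building_mask = cv2.inRange(map_bgr, lower, upper)   # buildings white

        # Gradient of the binary mask is non-zero only at building boundaries.
        grad_x = cv2.Sobel(building_mask, cv2.CV_16S, 1, 0, ksize=3)
        grad_y = cv2.Sobel(building_mask, cv2.CV_16S, 0, 1, ksize=3)
        contour_map = ((np.abs(grad_x) + np.abs(grad_y)) > 0).astype(np.uint8) * 255
        return contour_map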
8. An unmanned aerial vehicle waypoint generation device, characterized in that the device is applied to an electronic device and comprises:
the map acquisition module is used for acquiring a map of a target area of the panoramic map to be established;
the road map generating module is used for generating a road map corresponding to the map based on the image data of the map, wherein the road map is an image used for representing each road in the map;
the shooting point identification module is used for identifying a plurality of first shooting points and a plurality of second shooting points in the road map; wherein each first shooting point is an intersection of a road, an inflection point of a road, a starting point of a road, or an end point of a road, and each second shooting point is a position point on a road in the road map other than a road starting point or a road end point;
a contour map generation module for generating contour maps representing the contours of the buildings in the map based on the image data of the map;
the position point determining module is used for correcting the outline of each building in the outline map according to a minimum rectangular coverage mode, determining the appointed position points of each side of the outline of each building from the corrected outline map, and obtaining a plurality of third shooting points;
and the coordinate obtaining module is used for acquiring longitude and latitude coordinates of each first shooting point, each second shooting point, and each third shooting point to serve as waypoints of the unmanned aerial vehicle.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN201911219197.7A 2019-12-03 2019-12-03 Unmanned aerial vehicle waypoint generation method and device and electronic equipment Active CN110926475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911219197.7A CN110926475B (en) 2019-12-03 2019-12-03 Unmanned aerial vehicle waypoint generation method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN110926475A CN110926475A (en) 2020-03-27
CN110926475B true CN110926475B (en) 2021-04-27

Family

ID=69847310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911219197.7A Active CN110926475B (en) 2019-12-03 2019-12-03 Unmanned aerial vehicle waypoint generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110926475B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698422B (en) * 2020-06-10 2022-01-25 百度在线网络技术(北京)有限公司 Panoramic image acquisition method and device, electronic equipment and storage medium
CN111914047B (en) * 2020-07-16 2024-03-12 苏州数字地图信息科技股份有限公司 Geographic entity grid generation method, device and medium based on two-dimension code doorplate
CN113888635B (en) * 2021-09-29 2023-04-18 北京百度网讯科技有限公司 Visual positioning method and related device
CN115410104B (en) * 2022-09-16 2023-06-16 湖南胜云光电科技有限公司 Data processing system for acquiring image acquisition points of aircraft
CN115861039B (en) * 2022-11-21 2023-07-25 北京城市网邻信息技术有限公司 Information display method, device, equipment and medium
CN115620154B (en) * 2022-12-19 2023-03-07 江苏星湖科技有限公司 Panoramic map superposition replacement method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846472A (en) * 2016-12-02 2017-06-13 浙江宇视科技有限公司 A kind of method and device that photomap is generated based on panoramic table
CN107896317A (en) * 2017-12-01 2018-04-10 上海市环境科学研究院 Aircraft Aerial Images Integrated Processing Unit
CN108762293A (en) * 2018-04-11 2018-11-06 广州亿航智能技术有限公司 Sector scanning method, ground control station, unmanned plane and system based on unmanned plane
CN109117811A (en) * 2018-08-24 2019-01-01 颜俊君 A kind of system and method based on low-altitude remote sensing measuring technique estimation urban vegetation coverage rate
CN109741257A (en) * 2018-12-25 2019-05-10 鸿视线科技(北京)有限公司 Panorama sketch automatically shoots, splicing system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8947421B2 (en) * 2007-10-29 2015-02-03 Interman Corporation Method and server computer for generating map images for creating virtual spaces representing the real world

Also Published As

Publication number Publication date
CN110926475A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110926475B (en) Unmanned aerial vehicle waypoint generation method and device and electronic equipment
CN106997466B (en) Method and device for detecting road
CN107784654B (en) Image segmentation method and device and full convolution network system
CN104881860A (en) Positioning method and apparatus based on photographs
US20210209841A1 (en) Apparatus for building map using machine learning and image processing
CN105069453A (en) Image correction method and apparatus
CN111950543A (en) Target detection method and device
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
JP2022541977A (en) Image labeling method, device, electronic device and storage medium
JP2006350553A (en) Corresponding point retrieval method, mutual location method, three-dimensional image measurement method, corresponding point retrieval device, mutual location device, three-dimensional image measurement device, corresponding point retrieval program and computer-readable recording medium with its program recorded
CN113658345A (en) Sample labeling method and device
CN112633114A (en) Unmanned aerial vehicle inspection intelligent early warning method and device for building change event
CN115410173B (en) Multi-mode fused high-precision map element identification method, device, equipment and medium
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN112597788B (en) Target measuring method, target measuring device, electronic apparatus, and computer-readable medium
CN111738906B (en) Indoor road network generation method and device, storage medium and electronic equipment
CN114898321A (en) Method, device, equipment, medium and system for detecting road travelable area
CN110837789B (en) Method and device for detecting object, electronic equipment and medium
CN114419070A (en) Image scene segmentation method, device, equipment and storage medium
US20210103726A1 (en) Building footprint
CN112509028A (en) Method and apparatus for estimating window area
CN111383337A (en) Method and device for identifying objects
CN117437288B (en) Photogrammetry method, device, equipment and storage medium
CN114845055B (en) Shooting parameter determining method and device of image acquisition equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant