CN116628115A - Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle - Google Patents

Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle

Info

Publication number
CN116628115A
CN116628115A (application CN202310295798.6A)
Authority
CN
China
Prior art keywords
semantic
map
unmanned aerial
aerial vehicle
virtual camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310295798.6A
Other languages
Chinese (zh)
Inventor
梁文斌
刘阳
卢云玲
祝宇
杨坤
王喆
徐宇
方琪鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Tengdun Technology Co Ltd
Original Assignee
Sichuan Tengdun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Tengdun Technology Co Ltd filed Critical Sichuan Tengdun Technology Co Ltd
Priority to CN202310295798.6A priority Critical patent/CN116628115A/en
Publication of CN116628115A publication Critical patent/CN116628115A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Instructional Devices (AREA)

Abstract

The application discloses a semantic map database and a semantic segmentation map generation method applied to an unmanned aerial vehicle, and relates to the field of unmanned aerial vehicle image processing. The semantic map database generation method comprises the following steps: step S1: performing semantic segmentation on the scene according to the satellite map; step S2: storing the segmentation result in tile form to create the semantic map database. Based on the semantic map database, a semantic segmentation map generation method applied to the unmanned aerial vehicle is provided. Through tile-based storage similar to that of a satellite map, the application avoids real-time semantic segmentation during flight and improves the stability of the semantic segmentation; meanwhile, the semantic map is projected onto the observation camera image plane to generate the semantic segmentation image in real time. From the generated semantic segmentation image, the position and contour of any segmented target in the world coordinate system can be queried directly.

Description

Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle
Technical Field
The application relates to the field of unmanned aerial vehicle image processing, in particular to a semantic map database and a semantic segmentation map generation method applied to unmanned aerial vehicles.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
During high-altitude flight, a large unmanned aerial vehicle sometimes needs to acquire semantic information about the ground it observes, such as forests, deserts, cities, rivers, oceans, signage and the like; this information can assist the unmanned aerial vehicle in positioning or in other intelligent decision-making.
Most semantic segmentation methods feed the observation image into a trained semantic segmentation network, which outputs the semantic segmentation image. Such methods require the network to be fully trained, so the data annotation workload is very large; moreover, network-based methods have limited accuracy and real-time performance.
Furthermore, due to seasonal and weather variations, the network may identify a scene of the same semantic type as different semantic types in different periods.
Disclosure of Invention
The application aims at: in view of the problems in the background art, providing a semantic map database and a semantic segmentation map generation method applied to an unmanned aerial vehicle. Through tile-based storage similar to that of a satellite map, real-time semantic segmentation during flight of the unmanned aerial vehicle is avoided, and the stability of the semantic segmentation is improved; meanwhile, the semantic map is projected onto the observation camera image plane to generate the semantic segmentation image in real time; from the generated semantic segmentation image, the position and contour of any segmented target in the world coordinate system can be queried directly.
The technical scheme of the application is as follows:
the semantic map database generation method applied to the unmanned aerial vehicle comprises the following steps:
step S1: carrying out semantic segmentation on the scene according to the satellite map;
step S2: and storing the segmentation result in a tile form to form a semantic map database.
Further, the step S1 includes:
step S11: acquiring a plurality of levels of satellite maps corresponding to scenes;
step S12: semantic segmentation is carried out on each level of the satellite map respectively to obtain a plurality of semantic type areas; different semantic type regions are represented by different gray values.
Further, the step S2 includes:
and cutting the segmented satellite map into image tiles with specified pixels, and storing the image tiles to form a semantic map database.
Further, the step S12 includes:
corresponding semantic tags are respectively assigned to different semantic types;
the semantic tag type includes: uchar type and uint32 type.
The semantic segmentation map generation method applied to the unmanned aerial vehicle is based on the semantic map database generation method applied to the unmanned aerial vehicle, and comprises the following steps:
step A: according to GNSS coordinates of the unmanned aerial vehicle, image tiles of the corresponding areas are loaded from a semantic map database, and the semantic map is obtained through splicing;
step B: constructing a virtual camera, wherein the image plane of the virtual camera is the semantic map, with a customized length and width;
step C: and projecting pixel points in an observation image shot by the observation camera onto an image plane of the virtual camera one by one, and reading corresponding pixel values on the semantic map to generate a semantic segmentation map.
Further, the step B includes:
acquiring internal parameters of the virtual camera according to the assumed height of the virtual camera from the ground, the original size of the virtual camera, the pixel length and the pixel width of the local satellite map;
the virtual camera internal parameters include:
the transverse focal length, the longitudinal focal length, and the horizontal and vertical pixel coordinates of the image-plane center point.
Further, the step C includes:
step C1: under a virtual camera coordinate system, calculating a translation value from an observation camera center to a virtual camera center on a virtual camera depth normalization plane;
step C2: constructing a rotation matrix of the observation camera coordinate system relative to the virtual camera coordinate system by using the three-axis attitude angle matrix of the observation camera
Step C3: creating an image with the same size as the observed image as a container of the virtual observed image, projecting each pixel point coordinate of the observed image onto the image through transformation, reading a corresponding pixel value on a semantic map, filling the pixel value on the image, and generating a semantic segmentation map.
Further, the step C1 includes:
wherein:
t_x, t_y denote the translation value from the observation camera center to the virtual camera center on the virtual camera depth-normalization plane;
u_c, v_c denote the pixel coordinates of the vertical projection of the observation camera center onto the virtual camera image plane.
Further, the step C2 includes:
wherein:
a rotation matrix representing an observation camera coordinate system relative to a virtual camera coordinate system;
a rotation matrix representing the unmanned aerial vehicle body coordinate system relative to the observation camera coordinate system;
rpy_img denotes the three-axis attitude angles of the unmanned aerial vehicle at the capture moment of the observation camera.
Further, the step C3 includes:
step C31: transforming the pixel point coordinates into distorted pixel point coordinates by observing camera distortion parameters;
step C32: projecting the distorted pixel point coordinates onto a depth normalization plane of an observation camera to obtain the depth normalization coordinates of the pixel points;
step C33: calculating the projection of the depth normalization coordinates of the pixel points in the gravity direction;
step C34: according to the projection result, determining the projection distance of the space point corresponding to the pixel point on the z axis of the observation camera coordinate system;
step C35: obtaining the coordinates of the space point corresponding to the pixel point in the observation camera coordinate system, according to the depth-normalized coordinates of the pixel point and the projection distance of the corresponding space point on the z-axis of the observation camera coordinate system;
step C36: using the rotation matrix, transforming the space-point coordinates into the camera coordinate system of the virtual camera;
step C37: using the internal parameters of the virtual camera, projecting the transformed space point onto the virtual camera image plane to obtain the corresponding pixel coordinates;
step C38: reading, on the semantic map, the pixel value corresponding to the pixel coordinates obtained in step C37, and filling it into that pixel position of the virtual observation image.
Compared with the prior art, the application has the beneficial effects that:
the method is applied to a semantic map database and a semantic segmentation map generation method of an unmanned aerial vehicle, the unmanned aerial vehicle obtains a semantic segmentation result by directly consulting the database in the flight process, and the semantic segmentation result is more stable and quicker than real-time semantic segmentation, and the method of the patent requires less than 3ms for generating a semantic segmentation map on an Inlet-Weida AGX Orin embedded computer platform without using GPU acceleration. And has stable segmentation results for any season and weather change, such as snow, thick fog and the like.
Drawings
FIG. 1 is a flow chart of a semantic map database generation method applied to an unmanned aerial vehicle;
FIG. 2 is a flow chart of a semantic segmentation graph generation method applied to an unmanned aerial vehicle;
FIG. 3 is an illustration of a satellite ground map;
FIG. 4 is an illustration of a satellite earth plot after semantic segmentation;
FIG. 5 is a schematic illustration of a tiled memory of a semantic segmentation map;
FIG. 6 is a diagram illustrating the generation of semantic segmentation from a semantic map database;
fig. 7 is a schematic diagram of a semantic map of an unmanned aerial vehicle projected onto an observation camera to generate a semantic segmentation map based on a virtual camera;
reference numerals: 1-virtual camera, 2-observation camera, 3-virtual camera image plane, 4-projection area of observation image of observation camera on virtual camera image plane.
Detailed Description
It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The features and capabilities of the present application are described in further detail below in connection with examples.
Example 1
Referring to fig. 1, the semantic map database generation method applied to the unmanned aerial vehicle includes:
step S1: carrying out semantic segmentation on the scene according to the satellite map;
step S2: and storing the segmentation result in a tile form to form a semantic map database.
In this embodiment, specifically, the step S1 includes:
step S11: acquiring satellite maps of a plurality of levels corresponding to the scene; it should be noted that the satellite map has different levels, and in this embodiment the 15th, 17th and 19th levels of the satellite map are used to generate a three-level semantic map database, which is generated offline; the so-called scene refers to the size of the region; the satellite map is shown in fig. 3;
step S12: performing semantic segmentation on each level of the satellite map to obtain a plurality of semantic type regions, with different semantic type regions represented by different gray values; preferably, the segmentation may be performed manually or by a deep learning network; for example, as shown in fig. 4, regions of three semantic types (city, river and forest) are obtained by segmentation and represented by different gray values;
it should be noted that the semantic annotation of the satellite map at different levels has different precision;
for example, level 15 can only annotate wide rivers, large urban areas and the like;
level 17 can annotate narrower rivers, small urban areas and the like;
level 19 can annotate finer semantic divisions such as roads, parks and landmark buildings.
In this embodiment, specifically, the step S2 includes:
cutting the segmented satellite map into image tiles of a specified pixel size and storing them to form the semantic map database; preferably, as shown in fig. 5, the segmented satellite map is cut into image tiles of 256×256 pixels for storage; the tile index is the same as that of the satellite map of the corresponding level and region.
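For illustration only, the tile cutting and index-preserving storage described above might look like the following sketch; the function name, the PNG format, and the {level}/{x}/{y}.png directory layout are assumptions borrowed from common slippy-map conventions, not details given in the patent.

```python
import os
import numpy as np
import cv2  # OpenCV, assumed available for image I/O

TILE_SIZE = 256  # pixels, as used in this embodiment

def save_semantic_tiles(segmented_map: np.ndarray, level: int, x0: int, y0: int, out_dir: str):
    """Cut a segmented satellite map into 256x256 tiles and store them.

    x0, y0 are the tile indices of the map's top-left corner at this level,
    so each stored tile keeps the same index as the satellite tile it covers.
    The {out_dir}/{level}/{x}/{y}.png layout is a hypothetical naming scheme.
    """
    h, w = segmented_map.shape[:2]
    for row in range(0, h, TILE_SIZE):
        for col in range(0, w, TILE_SIZE):
            tile = segmented_map[row:row + TILE_SIZE, col:col + TILE_SIZE]
            if tile.shape[0] != TILE_SIZE or tile.shape[1] != TILE_SIZE:
                continue  # skip partial tiles at the border
            tx, ty = x0 + col // TILE_SIZE, y0 + row // TILE_SIZE
            tile_dir = os.path.join(out_dir, str(level), str(tx))
            os.makedirs(tile_dir, exist_ok=True)
            cv2.imwrite(os.path.join(tile_dir, f"{ty}.png"), tile)
```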
In this embodiment, specifically, the step S12 includes:
corresponding semantic tags are respectively assigned to different semantic types;
the semantic tag type includes: uchar type and uint32 type;
that is, in this embodiment, the semantic tags are stored using data types of different byte lengths according to the number of tags;
for the semantic maps corresponding to levels 15 and 17, the number of labels is small, so one uchar data type is used as the semantic tag; that is, each pixel of the semantic map is one uchar value, and at most 256 different labels can be defined;
for the level-19 semantic map, a large number of landmark buildings and other landmarks are annotated, so the number of semantic tags increases greatly; the uint32 data type is used as the semantic tag, and at most 4294967295 different semantic types can be defined.
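A small sketch of how the per-level label type could be selected; the helper name and the use of NumPy arrays are illustrative assumptions, while the uchar/uint32 split follows the text above.

```python
import numpy as np

def label_dtype_for_level(level: int):
    """Return the per-pixel label type for a semantic-map level.

    Levels 15 and 17 use few labels (uchar, at most 256 values);
    level 19 uses many labels (uint32, at most 4294967295 values).
    """
    return np.uint8 if level in (15, 17) else np.uint32

# Example: allocate an empty 256x256 tile with the appropriate label type
tile_15 = np.zeros((256, 256), dtype=label_dtype_for_level(15))  # uchar labels
tile_19 = np.zeros((256, 256), dtype=label_dtype_for_level(19))  # uint32 labels
```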
Example two
Based on the semantic map database generated by the semantic map database generation method of the first embodiment, the second embodiment proposes a semantic segmentation map generation method applied to an unmanned aerial vehicle: assuming that the position and attitude of the unmanned aerial vehicle are known, the semantic segmentation map can be generated by projecting the semantic map onto the observation camera image plane, as shown in fig. 6.
Referring to fig. 2, the semantic segmentation map generating method applied to the unmanned aerial vehicle includes:
step A: according to GNSS coordinates of the unmanned aerial vehicle, image tiles of the corresponding areas are loaded from a semantic map database, and the semantic map is obtained through splicing;
step B: constructing a virtual camera, wherein the image plane of the virtual camera is the semantic map, with a customized length and width;
step C: and projecting pixel points in an observation image shot by the observation camera onto an image plane of the virtual camera one by one, and reading corresponding pixel values on the semantic map to generate a semantic segmentation map.
In this embodiment, specifically, the step B includes:
acquiring internal parameters of the virtual camera according to the assumed height of the virtual camera from the ground, the original size of the virtual camera, the pixel length and the pixel width of the local satellite map;
the virtual camera internal parameters include:
the transverse focal length, the longitudinal focal length, and the horizontal and vertical pixel coordinates of the image-plane center point.
As shown in fig. 7, 1 denotes a virtual camera, 2 denotes an observation camera, 3 denotes a virtual camera image plane, and 4 denotes a projection area of an observation image of the observation camera on the virtual camera image plane.
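The patent does not spell out the intrinsics formula, so the following is a hedged sketch of one consistent choice: a nadir-looking pinhole virtual camera at an assumed height whose image plane exactly covers the local semantic map. The function name and the derivation f_x = H * map_width_px / ground_width_m are assumptions for illustration, not the patent's stated equations.

```python
import numpy as np

def virtual_camera_intrinsics(height_m: float, ground_w_m: float, ground_h_m: float,
                              map_w_px: int, map_h_px: int) -> np.ndarray:
    """Build a 3x3 intrinsic matrix for the virtual camera.

    Assumes a pinhole camera at height_m above the ground, looking straight
    down, whose image plane is the local semantic map (map_w_px x map_h_px
    pixels covering ground_w_m x ground_h_m metres on the ground).
    """
    fx = height_m * map_w_px / ground_w_m    # transverse focal length (pixels)
    fy = height_m * map_h_px / ground_h_m    # longitudinal focal length (pixels)
    cx, cy = map_w_px / 2.0, map_h_px / 2.0  # image-plane centre point
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```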
In this embodiment, specifically, the step C includes:
step C1: in the virtual camera coordinate system, calculating the translation value from the observation camera center to the virtual camera center on the virtual camera depth-normalization plane; it should be noted that the virtual camera center corresponds to the center point of the virtual camera image plane, that is, the center point of the semantic-map tile in which the GNSS coordinates of the unmanned aerial vehicle fall;
step C2: constructing a rotation matrix of the observation camera coordinate system relative to the virtual camera coordinate system by using the three-axis attitude angle matrix of the observation camera
Step C3: creating an image with the same size as the observed image as a container of the virtual observed image, projecting each pixel point coordinate of the observed image onto the image through transformation, reading a corresponding pixel value on a semantic map, filling the pixel value on the image, and generating a semantic segmentation map.
In this embodiment, specifically, the step C1 includes:
wherein:
t_x, t_y denote the translation value from the observation camera center to the virtual camera center on the virtual camera depth-normalization plane;
u_c, v_c denote the pixel coordinates of the vertical projection of the observation camera center onto the virtual camera image plane; that is, the GNSS coordinates of the unmanned aerial vehicle at the capture moment of the observation camera are projected vertically onto the ground, and that ground point is then expressed as pixel coordinates on the virtual camera image plane.
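The translation formula itself is not visible in this text; the sketch below shows one consistent reading in which (u_c, v_c) is converted onto the virtual camera's depth-normalization plane using its intrinsics. The function name and the exact formula are assumptions.

```python
def translation_on_normalized_plane(u_c: float, v_c: float, K_v) -> tuple:
    """Translation from the observation camera centre to the virtual camera
    centre, expressed on the virtual camera's depth-normalization plane.

    (u_c, v_c): pixel where the observation camera centre projects vertically
    onto the virtual camera image plane; K_v: virtual camera intrinsic matrix.
    """
    fx, fy = K_v[0, 0], K_v[1, 1]
    cx, cy = K_v[0, 2], K_v[1, 2]
    t_x = (u_c - cx) / fx
    t_y = (v_c - cy) / fy
    return t_x, t_y
```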
In this embodiment, specifically, the step C2 includes:
wherein:
a rotation matrix representing an observation camera coordinate system relative to a virtual camera coordinate system;
a rotation matrix representing the unmanned aerial vehicle body coordinate system relative to the observation camera coordinate system;
rpy_img denotes the three-axis attitude angles of the unmanned aerial vehicle at the capture moment of the observation camera;
R denotes the function that converts the three-axis attitude angles into a three-dimensional rotation matrix.
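The composition of this rotation is likewise not visible here; the sketch below shows one plausible reading that combines the body-to-camera mounting rotation with R(rpy_img). The multiplication order, the Euler-angle convention, and the assumption that the virtual camera axes coincide with the attitude reference frame are all illustrative choices, not statements of the patent's formula.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_obs_to_virtual(rpy_img, R_body_to_obs: np.ndarray) -> np.ndarray:
    """Rotation taking points from the observation camera frame to the virtual camera frame.

    rpy_img: UAV roll/pitch/yaw (radians) at the capture moment;
    R_body_to_obs: rotation of the UAV body frame relative to the observation
    camera frame (gimbal/mounting calibration).
    The virtual camera axes are assumed aligned with the attitude reference frame.
    """
    R_virt_from_body = Rotation.from_euler("xyz", rpy_img).as_matrix()  # R(rpy_img)
    return R_virt_from_body @ R_body_to_obs.T  # observation camera -> virtual camera
```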
In this embodiment, specifically, the step C3 includes:
step C31: transforming the pixel coordinates into distorted pixel coordinates using the observation camera distortion parameters; in this embodiment only radial distortion is considered, and tangential distortion may be added in other embodiments; the pixel coordinates here are pixel coordinates that do not yet account for the distortion effect;
specifically, the step C31 includes:
wherein:
k_1, k_2, k_3 are the observation camera distortion parameters;
r^2 = (u_i - c_x)^2 + (v_i - c_y)^2, where (c_x, c_y) are the pixel coordinates of the observation camera center point;
(u_i, v_i) are the pixel coordinates;
(u'_i, v'_i) are the distorted pixel coordinates;
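The distortion equation is missing from this text; below is a sketch using the standard three-term radial model together with the r^2 defined above. Applying the polynomial to pixel offsets from the camera centre is an assumption consistent with the variable definitions, not the patent's verbatim formula.

```python
def apply_radial_distortion(u_i, v_i, c_x, c_y, k1, k2, k3):
    """Map undistorted pixel coordinates to distorted ones (radial terms only)."""
    r2 = (u_i - c_x) ** 2 + (v_i - c_y) ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    u_d = c_x + (u_i - c_x) * scale
    v_d = c_y + (v_i - c_y) * scale
    return u_d, v_d
```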
step C32: projecting the distorted pixel point coordinates onto a depth normalization plane of an observation camera to obtain the depth normalization coordinates of the pixel points;
specifically, the step C32 includes:
wherein:
the depth-normalized coordinates of the pixel point;
K_c is the internal parameter (intrinsic) matrix of the observation camera;
it should be noted that the depth-normalized coordinate is a three-dimensional vector;
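A minimal sketch of step C32 as it is usually done with an intrinsic matrix: the distorted pixel is back-projected with the inverse of K_c onto the plane z = 1. The patent's own expression is not recoverable here, so treat this as an assumed reconstruction.

```python
import numpy as np

def pixel_to_depth_normalized(u_d: float, v_d: float, K_c: np.ndarray) -> np.ndarray:
    """Back-project a (distorted) pixel onto the observation camera's
    depth-normalization plane (z = 1)."""
    p = np.array([u_d, v_d, 1.0])
    return np.linalg.inv(K_c) @ p  # 3-vector [x_n, y_n, 1]
```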
step C33: calculating the projection of the depth normalization coordinates of the pixel points in the gravity direction;
specifically, the step C33 includes:
wherein:
g_c denotes the gravity direction in the observation camera coordinate system;
step C34: according to the projection result, determining the projection distance of the space point corresponding to the pixel point on the z axis of the observation camera coordinate system;
specifically, the step C34 includes:
if θ is smaller than the threshold (0.001 in this embodiment), the pixel can never be projected onto the ground plane (its projection direction vector is parallel to the ground or points toward the sky), and the pixel value of this pixel is set to 0, indicating no semantic annotation;
if θ is greater than the threshold, then, assuming the ground is a horizontal plane, the projection distance of the space point corresponding to the pixel on the z-axis of the observation camera coordinate system is calculated
Wherein:
h_c denotes the height of the observation camera above the ground;
step C35: obtaining coordinates of the space point corresponding to the pixel point under the observation camera coordinate system according to the depth normalized coordinates of the pixel point and the projection distance of the corresponding space point on the z-axis of the observation camera coordinate system
Specifically, the step C35 includes:
step C36: using the rotation matrix, transforming the space-point coordinates from the observation camera coordinate system into the camera coordinate system of the virtual camera;
Specifically, the step C36 includes:
step C37: using the internal parameters of the virtual camera, projecting the transformed space point onto the virtual camera image plane to obtain the corresponding pixel coordinates;
specifically, the step C37 includes:
wherein:
representing corresponding pixel coordinates;
step C38: reading, on the semantic map, the pixel value (i.e., the semantic segmentation label) corresponding to the pixel coordinates obtained in step C37, and filling it into that pixel position of the virtual observation image; the coordinates of every pixel are traversed in this way to read the corresponding pixel values on the semantic map, thereby generating the semantic segmentation map.
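Putting steps C31-C38 together, the sketch below traces one pixel from the observation image to a semantic label. It reuses apply_radial_distortion from the earlier sketch; the way gravity is derived from the rotation matrix, the use of the centre-to-centre translation on the normalized plane, and all function names are assumptions made to keep the example self-contained, not the patent's exact equations.

```python
import numpy as np

def project_pixel_to_semantic_map(u_i, v_i, K_c, dist, h_c,
                                  R_obs_to_virt, t_xy, K_v, semantic_map,
                                  theta_thresh=0.001):
    """Hedged sketch of steps C31-C38 for a single observation-image pixel.

    dist: (k1, k2, k3) radial distortion parameters; h_c: observation camera
    height above the ground. Returns the semantic label read from the semantic
    map, or 0 when the pixel cannot hit the ground plane. Gravity in the
    observation camera frame is taken from the virtual camera z-axis, which
    is assumed to point down toward the ground in this sketch.
    """
    # C31: radial distortion (helper sketched earlier in this document)
    u_d, v_d = apply_radial_distortion(u_i, v_i, K_c[0, 2], K_c[1, 2], *dist)
    # C32: depth-normalized coordinates in the observation camera frame
    p_n = np.linalg.inv(K_c) @ np.array([u_d, v_d, 1.0])
    # C33: projection of p_n onto the gravity direction
    g_c = R_obs_to_virt.T @ np.array([0.0, 0.0, 1.0])
    theta = float(g_c @ p_n)
    # C34: reject rays that never reach the ground plane
    if theta < theta_thresh:
        return 0
    z = h_c / theta                    # projection distance on the obs-camera z-axis
    P_obs = z * p_n                    # C35: space point in the observation camera frame
    P_virt = R_obs_to_virt @ P_obs     # C36: rotate into the virtual camera frame
    x_n, y_n = P_virt[0] / P_virt[2], P_virt[1] / P_virt[2]
    # C37: shift by the centre-to-centre translation, then project with the virtual intrinsics
    u_v = K_v[0, 0] * (x_n + t_xy[0]) + K_v[0, 2]
    v_v = K_v[1, 1] * (y_n + t_xy[1]) + K_v[1, 2]
    # C38: read the semantic label at that pixel, if it lies inside the map
    h, w = semantic_map.shape[:2]
    if 0 <= int(v_v) < h and 0 <= int(u_v) < w:
        return int(semantic_map[int(v_v), int(u_v)])
    return 0
```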
After the semantic segmentation map is obtained by projection, the unmanned aerial vehicle has two corresponding images: the actual observation image and the semantic segmentation image obtained from the semantic map database. In practical applications, the observation camera can only observe part of a connected semantic block, such as part of a forest. If the unmanned aerial vehicle needs all the information about that semantic block, such as the size and contour range of the whole forest, it can obtain it by querying the semantic map database. This information is stored in txt format in the parent folder of the semantic map tiles. In the txt file, all the information about the connected semantic blocks of different levels corresponding to a pixel position can be looked up through the tile index number and the pixel coordinates within the tile.
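The patent does not specify the layout of this txt file, so the following lookup is purely illustrative: it assumes a hypothetical "label;area;contour" record per connected block stored next to the tiles, and first reads the label from the tile at the given pixel.

```python
import cv2

def query_connected_block(tile_png: str, block_txt: str, u: int, v: int):
    """Illustrative only: the real txt layout is not given in the patent.

    The pixel (u, v) is read from the tile to obtain its semantic label, then
    the label is looked up in a hypothetical 'label;area_m2;contour' file that
    is assumed to sit in the tile's parent folder.
    """
    label = int(cv2.imread(tile_png, cv2.IMREAD_UNCHANGED)[v, u])
    with open(block_txt, "r", encoding="utf-8") as f:
        for line in f:
            fields = line.strip().split(";")
            if len(fields) >= 3 and int(fields[0]) == label:
                return {"label": label, "area_m2": float(fields[1]), "contour": fields[2]}
    return None
```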
The above examples merely illustrate specific embodiments of the application, which are described in more detail and are not to be construed as limiting the scope of the application. It should be noted that it is possible for a person skilled in the art to make several variants and modifications without departing from the technical idea of the application, which fall within the scope of protection of the application.
This background section is provided to generally present the context of the present application. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present application.

Claims (10)

1. The semantic map database generation method applied to the unmanned aerial vehicle is characterized by comprising the following steps of:
step S1: carrying out semantic segmentation on the scene according to the satellite map;
step S2: and storing the segmentation result in a tile form to form a semantic map database.
2. The method for generating a semantic map database applied to an unmanned aerial vehicle according to claim 1, wherein the step S1 comprises:
step S11: acquiring a plurality of levels of satellite maps corresponding to scenes;
step S12: semantic segmentation is carried out on each level of the satellite map respectively to obtain a plurality of semantic type areas; different semantic type regions are represented by different gray values.
3. The method for generating a semantic map database applied to an unmanned aerial vehicle according to claim 2, wherein the step S2 comprises:
and cutting the segmented satellite map into image tiles with specified pixels, and storing the image tiles to form a semantic map database.
4. A semantic map database generating method applied to an unmanned aerial vehicle according to claim 3, wherein the step S12 comprises:
corresponding semantic tags are respectively assigned to different semantic types;
the semantic tag type includes: uchar type and uint32 type.
5. The semantic segmentation map generation method applied to the unmanned aerial vehicle is characterized by comprising the following steps of:
step A: according to GNSS coordinates of the unmanned aerial vehicle, image tiles of the corresponding areas are loaded from a semantic map database, and the semantic map is obtained through splicing;
step B: constructing a virtual camera, wherein the image plane of the virtual camera is the semantic map, with a customized length and width;
step C: and projecting pixel points in an observation image shot by the observation camera onto an image plane of the virtual camera one by one, and reading corresponding pixel values on the semantic map to generate a semantic segmentation map.
6. The method for generating a semantic segmentation map for an unmanned aerial vehicle according to claim 5, wherein step B comprises:
acquiring internal parameters of the virtual camera according to the assumed height of the virtual camera from the ground, the original size of the virtual camera, the pixel length and the pixel width of the local satellite map;
the virtual camera internal parameters include:
the transverse focal length, the longitudinal focal length, and the horizontal and vertical pixel coordinates of the image-plane center point.
7. The method for generating a semantic segmentation map for an unmanned aerial vehicle according to claim 6, wherein step C comprises:
step C1: under a virtual camera coordinate system, calculating a translation value from an observation camera center to a virtual camera center on a virtual camera depth normalization plane;
step C2: constructing a rotation matrix of the observation camera coordinate system relative to the virtual camera coordinate system by using the three-axis attitude angle matrix of the observation camera
Step C3: creating an image with the same size as the observed image as a container of the virtual observed image, projecting each pixel point coordinate of the observed image onto the image through transformation, reading a corresponding pixel value on a semantic map, filling the pixel value on the image, and generating a semantic segmentation map.
8. The method for generating a semantic segmentation map for an unmanned aerial vehicle according to claim 7, wherein the step C1 comprises:
wherein:
t_x, t_y denote the translation value from the observation camera center to the virtual camera center on the virtual camera depth-normalization plane;
u_c, v_c denote the pixel coordinates of the vertical projection of the observation camera center onto the virtual camera image plane.
9. The method for generating a semantic segmentation map for an unmanned aerial vehicle according to claim 8, wherein the step C2 comprises:
wherein:
a rotation matrix representing an observation camera coordinate system relative to a virtual camera coordinate system;
a rotation matrix representing the unmanned aerial vehicle body coordinate system relative to the observation camera coordinate system;
rpy_img denotes the three-axis attitude angles of the unmanned aerial vehicle at the capture moment of the observation camera.
10. The method for generating a semantic segmentation map for an unmanned aerial vehicle according to claim 9, wherein the step C3 comprises:
step C31: transforming the pixel point coordinates into distorted pixel point coordinates by observing camera distortion parameters;
step C32: projecting the distorted pixel point coordinates onto a depth normalization plane of an observation camera to obtain the depth normalization coordinates of the pixel points;
step C33: calculating the projection of the depth normalization coordinates of the pixel points in the gravity direction;
step C34: according to the projection result, determining the projection distance of the space point corresponding to the pixel point on the z axis of the observation camera coordinate system;
step C35: obtaining coordinates of the space point corresponding to the pixel point under the observation camera coordinate system according to the depth normalized coordinates of the pixel point and the projection distance of the corresponding space point on the z-axis of the observation camera coordinate system;
step C36: using the rotation matrix, transforming the space-point coordinates into the camera coordinate system of the virtual camera;
step C37: using the internal parameters of the virtual camera, projecting the transformed space point onto the virtual camera image plane to obtain the corresponding pixel coordinates;
step C38: reading, on the semantic map, the pixel value corresponding to the pixel coordinates obtained in step C37, and filling it into that pixel position of the virtual observation image.
CN202310295798.6A 2023-03-24 2023-03-24 Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle Pending CN116628115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310295798.6A CN116628115A (en) 2023-03-24 2023-03-24 Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310295798.6A CN116628115A (en) 2023-03-24 2023-03-24 Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN116628115A true CN116628115A (en) 2023-08-22

Family

ID=87625458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310295798.6A Pending CN116628115A (en) 2023-03-24 2023-03-24 Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN116628115A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116817892A (en) * 2023-08-28 2023-09-29 之江实验室 Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map
CN116817892B (en) * 2023-08-28 2023-12-19 之江实验室 Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map

Similar Documents

Publication Publication Date Title
CN111928862B (en) Method for on-line construction of semantic map by fusion of laser radar and visual sensor
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
US20210390329A1 (en) Image processing method, device, movable platform, unmanned aerial vehicle, and storage medium
CN108763287B (en) Construction method of large-scale passable regional driving map and unmanned application method thereof
AU2007355942B2 (en) Arrangement and method for providing a three dimensional map representation of an area
CN110648389A (en) 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
CN109633665A (en) The sparse laser point cloud joining method of traffic scene
CN115077556B (en) Unmanned vehicle field operation path planning method based on multi-dimensional map
CN111915517B (en) Global positioning method suitable for RGB-D camera under indoor illumination unfavorable environment
CN114241464A (en) Cross-view image real-time matching geographic positioning method and system based on deep learning
CN115331130B (en) Unmanned aerial vehicle inspection method based on geographical marker assisted navigation and unmanned aerial vehicle
CN111256696A (en) Aircraft autonomous navigation method with multi-feature and multi-level scene matching
CN115861591B (en) Unmanned aerial vehicle positioning method based on transformer key texture coding matching
CN116628115A (en) Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle
CN114509065A (en) Map construction method, map construction system, vehicle terminal, server side and storage medium
CN113838129B (en) Method, device and system for obtaining pose information
KR20220150170A (en) Drone used 3d mapping method
CN113836251B (en) Cognitive map construction method, device, equipment and medium
KR102557775B1 (en) Drone used 3d mapping method
Wang et al. A simple deep learning network for classification of 3D mobile LiDAR point clouds
CN115375766A (en) Unmanned aerial vehicle urban autonomous positioning method based on semantic map
CN114283397A (en) Global relocation method, device, equipment and storage medium
Fuller Aerial photographs as records of changing vegetation patterns
Choi et al. Automatic Construction of Road Lane Markings Using Mobile Mapping System Data.
CN117170501B (en) Visual tracking method based on point-line fusion characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination