CN117456121A - Topographic map acquisition and drawing method and device without camera
- Publication number: CN117456121A
- Application number: CN202311420408.XA
- Authority
- CN
- China
- Prior art keywords
- point cloud
- dimensional model
- dem
- cloud data
- generating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The disclosure provides a camera-free topographic map acquisition and drawing method and device. The method includes: acquiring point cloud data of a target operation area based on crisscross flight routes; generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data; generating a digital elevation model (DEM) from the point cloud data; generating a DEM three-dimensional model according to the illumination direction and illumination angle applied to the DEM; and drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model. This solves the problems in the related art that drawing efficiency is low and that the accuracy and precision of the drawn topographic map depend on the technical proficiency of the operator.
Description
Technical Field
The disclosure belongs to the technical field of topographic map drawing, and more particularly relates to a topographic map collecting and drawing method and device without a camera.
Background
A topographic map is a projection of terrain features, with their geographic locations and shapes, onto a horizontal plane. It is an important means of presenting geographic information and an indispensable reference in many fields such as engineering construction, planning and site selection, and natural resource monitoring. In the related art, drawing efficiency is low, and the accuracy and precision of the drawn topographic map depend on the technical proficiency of the operator.
Disclosure of Invention
The present disclosure aims to provide a camera-free topographic map acquisition and drawing method and device, so as to solve the problems in the related art that drawing efficiency is low and that the accuracy and precision of the drawn topographic map are affected by the technical proficiency of the operator.
In a first aspect of an embodiment of the present disclosure, a method for collecting and drawing a topographic map without a camera is provided, including:
acquiring point cloud data of a target operation area based on a crisscross route;
generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data;
generating a Digital Elevation Model (DEM) according to the point cloud data;
generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM;
and drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model.
In an exemplary embodiment of the present disclosure, the generating a three-dimensional model of the point cloud according to the point cloud intensity of the point cloud data includes:
generating a point cloud orthographic image according to the point cloud intensity of the point cloud data;
constructing a triangle network according to the point cloud data;
resampling the point cloud data based on the triangular network to generate a three-dimensional model;
and performing texture mapping on the three-dimensional model according to the point cloud orthographic image to generate the point cloud three-dimensional model.
In an exemplary embodiment of the present disclosure, the generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM includes:
generating an orthographic image of the DEM according to the illumination direction and the illumination angle of the DEM;
and generating the DEM three-dimensional model according to the orthographic image of the DEM.
In an exemplary embodiment of the present disclosure, the drawing a topography from the point cloud data, the point cloud three-dimensional model, and the DEM three-dimensional model includes:
performing ground object drawing according to the point cloud three-dimensional model;
performing terrain drawing according to the DEM three-dimensional model;
generating a grid according to the point cloud data;
generating contour lines according to isolines of the grid;
acquiring point cloud key points of the point cloud data, and converting the point cloud key points into elevation points;
and drawing a topographic map according to the ground features, the terrain, the contour lines and the elevation points.
In an exemplary embodiment of the present disclosure, the drawing a topographic map according to the ground features, the terrain, the contour lines and the elevation points includes:
performing topology inspection on the contour lines and the elevation points;
drawing an initial topographic map according to the ground features, the terrain, and the topology-checked contour lines and elevation points;
and carrying out finishing treatment on the initial topographic map and adding map frame information to generate the topographic map.
In one exemplary embodiment of the present disclosure, the finishing treatment includes resolving conflicts between the contour lines and the ground features, truncating contour lines, and adding annotations.
In an exemplary embodiment of the present disclosure, before generating the point cloud three-dimensional model according to the point cloud intensity of the point cloud data, the camera-free topographic map collecting and rendering method further includes:
preprocessing the point cloud data; the preprocessing comprises flight-strip splicing, strip adjustment, ground point classification and point cloud quality inspection.
In a second aspect of the embodiments of the present disclosure, there is provided a topographic map acquisition and rendering apparatus without a camera, including:
the point cloud data acquisition module is used for acquiring point cloud data of a target operation area based on a crisscross route;
the point cloud three-dimensional model generation module is used for generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data;
the DEM three-dimensional model generation module is used for generating a digital elevation model DEM according to the point cloud data and generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM;
and the topographic map drawing module is used for drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model.
A third aspect of the disclosed embodiments provides an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the camera-free topography acquisition mapping method as described in the first aspect above when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the camera-free topography acquisition mapping method as described in the first aspect above.
The camera-free topographic map acquisition and drawing method and device provided by the embodiments of the disclosure have the following beneficial effects:
According to the embodiments of the disclosure, point cloud data of the target operation area are acquired based on crisscross flight routes flown by an unmanned aerial vehicle. This effectively increases the number of point cloud returns from the side faces of buildings and the number of returns from the acquisition equipment penetrating vegetation to reach the ground, so that the topographic map is drawn more accurately; because no operator participation is required in this process, the problem in the related art that the accuracy and precision of the drawn topographic map depend on the operator's technical proficiency is solved. Meanwhile, by generating the point cloud three-dimensional model, the discrete point cloud data are converted into continuous mesh data, so that the data are easier to snap to while drawing the topographic map, which improves drawing efficiency; by generating the DEM three-dimensional model, the speed of identifying the terrain under vegetation is increased, which further improves drawing efficiency and solves the problem of low drawing efficiency in the related art. In addition, unlike the related art, the unmanned aerial vehicle in the embodiments of the disclosure does not need to carry a camera, which reduces cost.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are required for the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a method for collecting and drawing a topographic map without a camera according to an embodiment of the disclosure;
FIG. 2 is a block diagram of a camera-free topographic map collecting and rendering device according to an embodiment of the present disclosure;
fig. 3 is a schematic block diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings.
A topographic map is an important means of presenting geographic information and an indispensable reference in many fields such as engineering construction, planning and site selection, and natural resource monitoring. In the related art, topographic maps are generally drawn in one of two ways. In the first, RTK (Real-Time Kinematic, i.e. real-time differential positioning) equipment is used to survey terrain and feature points in the field, and the topographic map is then drawn in the office from the feature points collected in the field. In the second, aerial photography is used to acquire a DOM (Digital Orthophoto Map) and a DEM (Digital Elevation Model) of the site, and a stereo pair is made or a three-dimensional model is built. The first approach suffers from low drawing efficiency, cannot cover areas the operator cannot reach, and its accuracy and precision depend on the operator's technical proficiency. The second approach improves efficiency compared with the first, but significantly increases the office workload, involves a cumbersome process, and cannot reflect the terrain under vegetation.
To address these problems, one approach acquires point cloud data with a station-type (terrestrial) laser scanner, splices the point clouds, and draws the topographic map with CASS software, which solves the problems that single-point field measurement is inefficient and that GNSS equipment cannot obtain positions where the signal is blocked. However, this approach still suffers from low field-collection efficiency and from severe scanning limitations in areas with many buildings or dense vegetation. It also requires feature points, targets and the like to be laid out and collected manually in advance, so it is not suitable for large-area topographic mapping. In addition, when the topographic map is drawn from the point cloud, the point cloud is difficult to snap to, which reduces office-work efficiency.
Another approach acquires a point cloud and an orthophoto with airborne LiDAR and produces the topographic map from them: an unmanned aerial vehicle collects the point cloud and the orthophoto, and EPS software is then used to build a three-dimensional model from which the topographic map is produced. Although field efficiency is improved, flight efficiency is still low, because a pure point cloud only requires an overlap rate of 25%-30% whereas orthophoto acquisition requires an overlap rate of no less than 65%. Moreover, although a three-dimensional model is built, the topographic map is in essence drawn from the orthophoto; since the orthophoto is a two-dimensional image, it cannot reflect the three-dimensional extent of ground objects, such as the routing of power lines, nor can it reflect the terrain under vegetation.
In order to solve the above-mentioned problems, an embodiment of the present disclosure provides a method for collecting and drawing a topographic map without a camera, referring to fig. 1 (fig. 1 is a schematic flow chart of a method for collecting and drawing a topographic map without a camera according to an embodiment of the present disclosure), which includes:
s101: and acquiring point cloud data of the target working area based on the crisscross airlines.
S102: and generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data.
S103: a digital elevation model DEM is generated from the point cloud data.
S104: and generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM.
S105: and drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model.
Compared with the related art, the embodiments of the present disclosure have the following advantages:
1. Point cloud data of the target operation area are acquired based on crisscross flight routes, which effectively increases the number of point cloud returns from the side faces of buildings. Meanwhile, because vegetation gaps point in different directions, crisscross routes also effectively increase the number of returns from the acquisition equipment penetrating the vegetation to reach the ground, so that the topographic map is drawn more accurately. No operator participation is required in this process, which solves the problem in the related art that the accuracy and precision of the drawn topographic map depend on the operator's technical proficiency.
2. Because point cloud data are discrete, snapping problems such as jumping points easily occur when a topographic map is drawn directly from them, which lowers drawing efficiency. By generating the point cloud three-dimensional model, the embodiments of the disclosure convert the discrete point cloud data into a continuous mesh, so that the data are easier to snap to and the drawing efficiency of the topographic map is improved.
3. The DEM three-dimensional model is generated according to the illumination direction and illumination angle applied to the DEM, which effectively reveals the relief of the terrain under vegetation and increases the speed of identifying it, solving the problem in the related art that the terrain under vegetation cannot be reflected.
4. The unmanned aerial vehicle in the embodiment of the disclosure does not need to carry a camera, and reduces the cost.
In one exemplary embodiment of the present disclosure, generating a point cloud three-dimensional model from point cloud intensities of point cloud data includes:
and generating a point cloud orthographic image according to the point cloud intensity of the point cloud data.
And constructing a triangle network according to the point cloud data.
And resampling the point cloud data based on the triangular network to generate a three-dimensional model.
And performing texture mapping on the three-dimensional model according to the point cloud orthographic image to generate a point cloud three-dimensional model.
In the present exemplary embodiment, generating the point cloud orthophoto from the point cloud intensity includes adjusting the display size of the points, coloring the point cloud by intensity, and enhancing the rendering with shading. Point cloud processing software can be used for this step. For example, the point cloud data are loaded in Trimble RealWorks, the point display size is set to 2-3 pixels, intensity is selected as the coloring mode and shading is selected as the enhancement effect, and the point cloud orthophoto is then exported.
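For illustration only, the following Python sketch shows the underlying idea of an intensity orthophoto: point intensities are averaged into a top-down raster and stretched to grey values. The function name, array inputs and cell size are assumptions made for this example; the sketch is not tied to any of the software products mentioned above.

```python
import numpy as np

def intensity_orthophoto(x, y, intensity, cell=0.1):
    """Rasterize point intensities into a top-down (orthographic) grey image."""
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y.max() - y) / cell).astype(int)     # row 0 = northern edge
    img_sum = np.zeros((rows.max() + 1, cols.max() + 1))
    img_cnt = np.zeros_like(img_sum)
    np.add.at(img_sum, (rows, cols), intensity)            # accumulate intensity per cell
    np.add.at(img_cnt, (rows, cols), 1)
    with np.errstate(invalid="ignore"):
        img = img_sum / img_cnt                             # mean intensity; NaN where empty
    lo, hi = np.nanpercentile(img, [2, 98])                 # contrast stretch to 8-bit
    return np.clip((img - lo) / (hi - lo) * 255, 0, 255)
```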
In the present exemplary embodiment, point cloud processing software can be used to construct the triangle network from the point cloud data and to resample the point cloud on that network to generate the three-dimensional model. For example, the point cloud data are loaded in Trimble RealWorks, the triangle network is constructed, the data are resampled to a resolution of about 0.1 meter, and the three-dimensional model is exported in OBJ format (a text-based mesh format).
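As a hedged illustration of triangulation followed by resampling, the sketch below triangulates the x/y coordinates and linearly interpolates elevations onto a regular grid, assuming 2.5D terrain-like data. The names and the grid step are illustrative only.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def tin_resample(points, step=0.1):
    """Build a triangle network over (x, y) and resample z on a regular grid."""
    xy, z = points[:, :2], points[:, 2]
    tin = Delaunay(xy)                          # the triangle network
    interp = LinearNDInterpolator(tin, z)       # linear interpolation inside each triangle
    gx = np.arange(xy[:, 0].min(), xy[:, 0].max(), step)
    gy = np.arange(xy[:, 1].min(), xy[:, 1].max(), step)
    gxx, gyy = np.meshgrid(gx, gy)
    gz = interp(gxx, gyy)                       # NaN outside the convex hull of the points
    return gxx, gyy, gz
```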
In the present exemplary embodiment, modeling software can be used to texture-map the three-dimensional model with the point cloud orthophoto to generate the point cloud three-dimensional model. For example, the three-dimensional model is loaded in 3DReshaper, divided into blocks, texture-mapped block by block with the point cloud orthophoto, and the textured point cloud three-dimensional model is finally exported in OBJ format.
As can be seen from this exemplary embodiment, generating the point cloud orthophoto from the point cloud intensity compensates for the lack of texture caused by the absence of camera imagery. Converting the point cloud into a three-dimensional model turns the discrete point cloud into a continuous mesh, which avoids the jumping-point problem that the discreteness of point cloud data causes while drawing the topographic map; the point cloud three-dimensional model is also more intuitive than the raw point cloud, so the drawing efficiency of the topographic map is significantly improved.
In an exemplary embodiment of the present disclosure, generating the digital elevation model DEM from the point cloud data can be implemented with point cloud processing and analysis software. For example, the point cloud data are loaded in LiDAR360, the resolution is set to 0.1 meter, and the DEM is generated and exported in TIF (Tagged Image File Format) format.
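For illustration only, the sketch below grids classified ground returns into a DEM raster by keeping the lowest elevation per cell; the function name, inputs and cell size are assumptions for the example and do not describe the cited software's algorithm.

```python
import numpy as np

def ground_points_to_dem(x, y, z, cell=0.1):
    """Grid classified ground returns into a DEM raster (lowest return per cell)."""
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y.max() - y) / cell).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.inf)
    np.minimum.at(dem, (rows, cols), z)      # keep the lowest elevation in each cell
    dem[np.isinf(dem)] = np.nan              # cells without a ground return stay undefined
    return dem
```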
In an exemplary embodiment of the present disclosure, generating a DEM three-dimensional model from an illumination direction, an illumination angle of the DEM includes:
and generating an orthographic image of the DEM according to the illumination direction and the illumination angle of the DEM.
And generating a DEM three-dimensional model according to the orthographic image of the DEM.
In the present exemplary embodiment, mapping software can be used to generate the orthophoto of the DEM according to the illumination direction and illumination angle. For example, the DEM is loaded in Global Mapper, the illumination direction and illumination angle are adjusted, the hill-shading (shaded relief) function of the software is applied so that the DEM displays the relief of the terrain, and the orthophoto of the DEM is then generated and exported.
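The effect of the illumination direction and angle can be reproduced with the standard hillshade formula. The sketch below is a generic implementation under assumed parameter values (azimuth 315°, altitude 45°), not the specific rendering of any software mentioned above.

```python
import numpy as np

def hillshade(dem, cell=0.1, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief image of a DEM for a given illumination direction and angle."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell)         # terrain gradients
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0) * 255        # 8-bit grey values
```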
In the present exemplary embodiment, drawing software can be used to generate the DEM three-dimensional model from the orthophoto of the DEM. For example, the CASS3D module of CASS software loads both the DEM and the orthophoto of the DEM, synthesizes them with the module's stereo model generation function, and exports the DEM three-dimensional model.
As this exemplary embodiment shows, generating the orthophoto of the DEM according to the illumination direction and illumination angle effectively reveals the relief of the terrain under vegetation, and generating the DEM three-dimensional model from the DEM and its orthophoto provides the elevation information of the terrain. At the same time, the speed of identifying the terrain under vegetation is increased, which improves the drawing efficiency of the topographic map.
In one exemplary embodiment of the present disclosure, mapping topography from point cloud data, a point cloud three-dimensional model, and a DEM three-dimensional model includes:
and (5) performing ground object drawing according to the point cloud three-dimensional model.
And carrying out terrain drawing according to the DEM three-dimensional model.
A grid is generated from the point cloud data.
Contour lines are generated from the isolines of the grid.
And acquiring point cloud key points of the point cloud data, and converting the point cloud key points into elevation points.
A topographic map is then drawn according to the ground features, the terrain, the contour lines and the elevation points.
Here, drawing the ground features from the point cloud three-dimensional model includes converting the OBJ-format point cloud three-dimensional model into a point cloud three-dimensional model in OSGB format (a binary format), and drawing the ground features from the OSGB-format model. The format conversion can be implemented with three-dimensional model software; for example, the OBJ-format point cloud three-dimensional model is loaded in OSGBLab (Oblique Mate), compressed, and converted into an OSGB-format point cloud three-dimensional model. Drawing the ground features from the OSGB-format point cloud three-dimensional model can be implemented with mapping software; for example, the OSGB-format model is loaded in SouthMap, and the ground features are drawn by selecting the corresponding feature codes.
The steps of generating a mesh from the point cloud data and generating contour lines from the isolines of the mesh can be implemented with modeling software. For example, the point cloud data are loaded in 3DReshaper to generate a 3D mesh, the mesh is smoothed, contour lines are then generated with the software's contour function and smoothed as well, and the contour lines are finally exported.
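A generic illustration of contour extraction from a gridded surface is sketched below; it relies on matplotlib's contouring and is only meant to show the idea, with the cell size and contour interval chosen arbitrarily.

```python
import numpy as np
import matplotlib.pyplot as plt

def grid_to_contours(dem, cell=0.1, interval=1.0):
    """Extract contour polylines from a gridded surface at a fixed interval."""
    ny, nx = dem.shape
    xs = np.arange(nx) * cell
    ys = np.arange(ny) * cell
    levels = np.arange(np.floor(np.nanmin(dem)),
                       np.ceil(np.nanmax(dem)) + interval, interval)
    cs = plt.contour(xs, ys, dem, levels=levels)
    # one list of (N, 2) vertex arrays per contour level
    return {lvl: segs for lvl, segs in zip(cs.levels, cs.allsegs)}
```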
The steps of obtaining the point cloud key points of the point cloud data and converting them into elevation points can be implemented with point cloud processing and analysis software together with mapping software. For example, the point cloud data are loaded in LiDAR360, and the point cloud key points are generated and exported in dxf format (an open vector data format); the dxf key points are then opened in SouthMap and converted into elevation points.
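As a loose stand-in for key-point extraction (not the cited software's algorithm), the sketch below simply thins ground points to one representative spot height per coarse grid cell; the cell size and selection rule are assumptions.

```python
import numpy as np

def thin_to_spot_heights(x, y, z, cell=10.0):
    """Keep one representative elevation point per cell (a simplified key-point stand-in)."""
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y - y.min()) / cell).astype(int)
    keep = []
    for r, c in set(zip(rows.tolist(), cols.tolist())):
        idx = np.where((rows == r) & (cols == c))[0]
        zc = z[idx]
        keep.append(idx[np.argmin(np.abs(zc - zc.mean()))])   # point closest to the cell mean
    return np.column_stack([x[keep], y[keep], z[keep]])        # (x, y, z) spot heights
```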
The step of drawing the topographic map from the ground features, the terrain, the contour lines and the elevation points can be implemented with mapping software, such as SouthMap.
As can be seen from this exemplary embodiment, drawing the topographic map from the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model improves drawing efficiency while effectively identifying the terrain under vegetation. Moreover, this exemplary embodiment does not require a camera, which reduces the cost of drawing the topographic map.
In one exemplary embodiment of the present disclosure, drawing a topographic map according to the ground features, the terrain, the contour lines and the elevation points comprises:
Topology checking is performed on the contour lines and the elevation points.
An initial topographic map is drawn according to the ground features, the terrain, and the topology-checked contour lines and elevation points.
The initial topographic map is then finished and map frame information is added to generate the topographic map.
Here, the topology check includes batch deletion of error sections where the elevation points and contour lines contradict each other, and can be implemented with mapping software; for example, the contour lines and elevation points are loaded in EPS software, and the contradictory point-line error sections are deleted in batch with the software's point-line contradiction check function. Drawing the initial topographic map from the ground features, the terrain, and the topology-checked contour lines and elevation points, finishing it, and adding map frame information can be implemented with mapping software; for example, the checked contour lines and elevation points are loaded together in SouthMap to generate the initial topographic map, which is then finished, divided into sheets in batch, and given map frame information, yielding the complete topographic map.
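One simple way to flag point-contour contradictions is sketched below, purely as an illustration: an elevation point is reported if its height differs from the level of the nearest contour line by more than one contour interval. The document does not specify the actual check used by the software above, so the rule, names and threshold here are assumptions.

```python
import numpy as np

def flag_contradictions(elev_pts, contours, interval=1.0):
    """Flag spot heights that disagree with the nearest contour line.

    elev_pts: (N, 3) array of (x, y, z) spot heights.
    contours: list of (level, vertices) pairs, vertices being an (M, 2) array.
    Returns indices of points whose height differs from the nearest contour
    level by more than one interval - a plausibility check, not a full test.
    """
    bad = []
    for i, (px, py, pz) in enumerate(elev_pts):
        best_d, best_level = np.inf, None
        for level, verts in contours:
            d = np.min(np.hypot(verts[:, 0] - px, verts[:, 1] - py))
            if d < best_d:
                best_d, best_level = d, level
        if best_level is not None and abs(pz - best_level) > interval:
            bad.append(i)
    return bad
```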
In one exemplary embodiment of the present disclosure, the finishing process includes processing contradictory points of the contour and the ground feature, truncating the contour, and noting.
In the present exemplary embodiment, the finishing treatment can be implemented with mapping software. For example, the initial topographic map is loaded in SouthMap, conflicts between contour lines and ground features are resolved, contour lines are truncated according to cartographic processing principles, and annotations are added.
In an exemplary embodiment of the present disclosure, before the point cloud three-dimensional model is generated from the point cloud intensity of the point cloud data, the camera-free topographic map acquisition and drawing method further includes preprocessing the point cloud data. The preprocessing comprises flight-strip splicing, strip adjustment, ground point classification and point cloud quality inspection.
The point cloud quality inspection comprises flight-route quality inspection, elevation quality inspection, strip-overlap analysis and point cloud density inspection. By preprocessing the point cloud data, the embodiments of the disclosure improve the quality of the point cloud data and provide reliable data support for the subsequent generation of the point cloud three-dimensional model and the DEM three-dimensional model.
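A density check can be pictured as counting returns per square metre on a coarse grid and reporting how much of the area falls below a required density. The sketch below is only an illustration; the cell size and the 16 pts/m² threshold are assumed values, not requirements taken from this document.

```python
import numpy as np

def density_check(x, y, cell=1.0, min_pts_per_m2=16):
    """Point cloud density inspection: returns per square metre on a coarse grid."""
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y - y.min()) / cell).astype(int)
    counts = np.zeros((rows.max() + 1, cols.max() + 1))
    np.add.at(counts, (rows, cols), 1)
    density = counts / (cell * cell)                  # points per square metre
    failing_ratio = float((density < min_pts_per_m2).mean())
    return density, failing_ratio                     # density map and failing-cell ratio
```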
In one exemplary embodiment of the present disclosure, acquiring point cloud data for a target work area based on a crisscross route includes:
device selection, including unmanned aerial vehicle with crisscross route planning functionality, selecting an on-board LIDAR device without camera model (LIDAR scanning device).
Take-off preparation: the airborne LiDAR equipment is debugged, the unmanned aerial vehicle is inspected, take-off and landing points are selected, the flight routes are planned, and the flight-line overlap rate is set. The routes are planned as crisscross routes.
Point cloud data acquisition: the unmanned aerial vehicle flies along the planned routes, and the airborne LiDAR equipment scans the target operation area to acquire the point cloud data.
In the present exemplary embodiment, the airborne LiDAR equipment does not need to carry a camera, which reduces its cost (a camera typically accounts for around 20% of the cost of an airborne LiDAR device). In addition, flying crisscross routes captures more point cloud returns from the side faces of buildings and structures; because vegetation gaps point in different directions, crisscross routes also effectively increase the number of returns from the airborne LiDAR penetrating the vegetation to the ground, which helps obtain more accurate terrain data. Setting a lower route overlap rate significantly improves flight efficiency; for example, reducing the route overlap rate from 60% to 30% increases flight efficiency by about 100%.
Corresponding to the camera-free topographic map collecting and drawing method of the above embodiments, fig. 2 is a block diagram of a camera-free topographic map collecting and drawing device according to an embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 2, the camera-free topography acquisition and rendering device 20 includes:
the point cloud data acquisition module 201 is configured to acquire point cloud data of a target operation area based on a crisscross route.
The point cloud three-dimensional model generation module 202 is configured to generate a point cloud three-dimensional model according to the point cloud intensity of the point cloud data.
The DEM three-dimensional model generating module 203 is configured to generate a digital elevation model DEM according to the point cloud data, and generate a DEM three-dimensional model according to an illumination direction and an illumination angle of the DEM.
The topographic map drawing module 204 is configured to draw a topographic map according to the point cloud data, the point cloud three-dimensional model, and the DEM three-dimensional model.
In an exemplary embodiment of the present disclosure, the point cloud three-dimensional model generation module 202 is specifically configured to:
and generating a point cloud orthographic image according to the point cloud intensity of the point cloud data.
And constructing a triangle network according to the point cloud data.
And resampling the point cloud data based on the triangular network to generate a three-dimensional model.
And performing texture mapping on the three-dimensional model according to the point cloud orthographic image to generate a point cloud three-dimensional model.
In an exemplary embodiment of the present disclosure, DEM three-dimensional model generation module 203 is specifically configured to:
and generating an orthographic image of the DEM according to the illumination direction and the illumination angle of the DEM.
And generating a DEM three-dimensional model according to the orthographic image of the DEM.
In one exemplary embodiment of the present disclosure, the topography mapping module 204 is specifically configured to:
and (5) performing ground object drawing according to the point cloud three-dimensional model.
And carrying out terrain drawing according to the DEM three-dimensional model.
A grid is generated from the point cloud data.
Contour lines are generated from the isolines of the grid.
And acquiring point cloud key points of the point cloud data, and converting the point cloud key points into elevation points.
A topographic map is then drawn according to the ground features, the terrain, the contour lines and the elevation points.
In an exemplary embodiment of the present disclosure, the topography mapping module 204 is specifically further configured to:
topology checking is performed on the contour lines and the elevation points.
An initial topographic map is drawn according to the ground features, the terrain, and the topology-checked contour lines and elevation points.
The initial topographic map is then finished and map frame information is added to generate the topographic map.
In one exemplary embodiment of the present disclosure, the topographic map drawing module 204 is further configured to perform the finishing treatment, which includes resolving conflicts between contour lines and ground features, truncating contour lines, and adding annotations.
In an exemplary embodiment of the present disclosure, the camera-free topography acquisition and rendering device 20 further includes a preprocessing module, where the preprocessing module is specifically configured to:
the point cloud data is preprocessed before generating a point cloud three-dimensional model from the point cloud intensities of the point cloud data. The preprocessing comprises ribbon splicing, ribbon adjustment, ground point classification and point cloud quality inspection.
Referring to fig. 3, fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the disclosure. The terminal 300 in the present embodiment as shown in fig. 3 may include: one or more processors 301, one or more input devices 302, one or more output devices 303, and one or more memories 304. The processor 301, the input device 302, the output device 303, and the memory 304 communicate with each other via a communication bus 305. The memory 304 is used to store a computer program comprising program instructions. The processor 301 is configured to execute program instructions stored in the memory 304. Wherein the processor 301 is configured to invoke program instructions to perform the following functions of the modules/units in the above described device embodiments, such as the functions of the modules 201 to 204 shown in fig. 2.
It should be appreciated that, in the disclosed embodiments, the processor 301 may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The input device 302 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 303 may include a display (LCD, etc.), a speaker, etc.
The memory 304 may include read only memory and random access memory and provides instructions and data to the processor 301. A portion of memory 304 may also include non-volatile random access memory. For example, the memory 304 may also store information of device type.
In a specific implementation, the processor 301, the input device 302 and the output device 303 described in the embodiments of the present disclosure may perform the implementations described in the embodiments of the camera-free topographic map acquisition and drawing method of the present disclosure, and may also perform the implementation of the terminal 300 described in the embodiments of the present disclosure, which is not repeated here.
In another embodiment of the disclosure, a computer readable storage medium is provided. The computer readable storage medium stores a computer program, and the computer program includes program instructions which, when executed by a processor, implement all or part of the procedures of the method embodiments described above. The procedures may also be completed by a computer program instructing related hardware: the computer program may be stored in a computer readable storage medium and, when executed by the processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The computer readable storage medium may be an internal storage unit of the terminal of any of the foregoing embodiments, such as a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store a computer program and other programs and data required for the terminal. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working procedures of the terminal and the unit described above may refer to the corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In several embodiments provided in the present application, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces or units, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present disclosure.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a specific embodiment of the present disclosure, but the protection scope of the present disclosure is not limited thereto, and any equivalent modifications or substitutions will be apparent to those skilled in the art within the scope of the present disclosure, and these modifications or substitutions should be covered in the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A topographic map acquisition and drawing method without a camera is characterized by comprising the following steps:
acquiring point cloud data of a target operation area based on a crisscross route;
generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data;
generating a Digital Elevation Model (DEM) according to the point cloud data;
generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM;
and drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model.
2. The camera-free topography acquisition and rendering method of claim 1, wherein the generating a point cloud three-dimensional model from the point cloud intensities of the point cloud data comprises:
generating a point cloud orthographic image according to the point cloud intensity of the point cloud data;
constructing a triangle network according to the point cloud data;
resampling the point cloud data based on the triangular network to generate a three-dimensional model;
and performing texture mapping on the three-dimensional model according to the point cloud orthographic image to generate the point cloud three-dimensional model.
3. The camera-free topographic map acquisition and rendering method of claim 1, wherein the generating the DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM includes:
generating an orthographic image of the DEM according to the illumination direction and the illumination angle of the DEM;
and generating the DEM three-dimensional model according to the orthographic image of the DEM.
4. The camera-free topography acquisition mapping method of claim 1, wherein the mapping the topography from the point cloud data, the point cloud three-dimensional model, and the DEM three-dimensional model comprises:
performing ground object drawing according to the point cloud three-dimensional model;
performing terrain drawing according to the DEM three-dimensional model;
generating a grid according to the point cloud data;
generating contour lines according to isolines of the grid;
acquiring point cloud key points of the point cloud data, and converting the point cloud key points into elevation points;
and drawing a topographic map according to the ground features, the terrain, the contour lines and the elevation points.
5. The camera-free topography acquisition mapping method of claim 4, wherein the drawing a topographic map according to the ground features, the terrain, the contour lines and the elevation points comprises:
performing topology inspection on the contour lines and the elevation points;
drawing an initial topographic map according to the ground features, the terrain, and the topology-checked contour lines and elevation points;
and carrying out finishing treatment on the initial topographic map and adding map frame information to generate the topographic map.
6. The camera-free topography acquisition and mapping method of claim 5, wherein said finishing treatment includes resolving conflicts between said contour lines and said ground features, truncating contour lines, and adding annotations.
7. The camera-free topography acquisition and rendering method of claim 1, wherein prior to generating a point cloud three-dimensional model from the point cloud intensities of the point cloud data, the camera-free topography acquisition and rendering method further comprises:
preprocessing the point cloud data; the preprocessing comprises flight-strip splicing, strip adjustment, ground point classification and point cloud quality inspection.
8. A camera-free topographic map acquisition and rendering device, comprising:
the point cloud data acquisition module is used for acquiring point cloud data of a target operation area based on a crisscross route;
the point cloud three-dimensional model generation module is used for generating a point cloud three-dimensional model according to the point cloud intensity of the point cloud data;
the DEM three-dimensional model generation module is used for generating a digital elevation model DEM according to the point cloud data and generating a DEM three-dimensional model according to the illumination direction and the illumination angle of the DEM;
and the topographic map drawing module is used for drawing a topographic map according to the point cloud data, the point cloud three-dimensional model and the DEM three-dimensional model.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311420408.XA CN117456121B (en) | 2023-10-30 | 2023-10-30 | Topographic map acquisition and drawing method and device without camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311420408.XA CN117456121B (en) | 2023-10-30 | 2023-10-30 | Topographic map acquisition and drawing method and device without camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117456121A true CN117456121A (en) | 2024-01-26 |
CN117456121B CN117456121B (en) | 2024-07-12 |
Family
ID=89594382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311420408.XA Active CN117456121B (en) | 2023-10-30 | 2023-10-30 | Topographic map acquisition and drawing method and device without camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117456121B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102881009A (en) * | 2012-08-22 | 2013-01-16 | 敦煌研究院 | Cave painting correcting and positioning method based on laser scanning |
CN110363054A (en) * | 2018-11-16 | 2019-10-22 | 北京京东尚科信息技术有限公司 | Road marking line recognition methods, device and system |
CN110111414A (en) * | 2019-04-10 | 2019-08-09 | 北京建筑大学 | A kind of orthography generation method based on three-dimensional laser point cloud |
CN113034689A (en) * | 2021-04-30 | 2021-06-25 | 睿宇时空科技(重庆)有限公司 | Laser point cloud-based terrain three-dimensional model, terrain map construction method and system, and storage medium |
WO2023060632A1 (en) * | 2021-10-14 | 2023-04-20 | 重庆数字城市科技有限公司 | Street view ground object multi-dimensional extraction method and system based on point cloud data |
CN114063616A (en) * | 2021-11-11 | 2022-02-18 | 深圳市城市公共安全技术研究院有限公司 | Method and device for planning forest area path based on three-dimensional laser scanning detection |
CN115018973A (en) * | 2022-04-13 | 2022-09-06 | 国网江苏省电力有限公司经济技术研究院 | Low-altitude unmanned-machine point cloud modeling precision target-free evaluation method |
Also Published As
Publication number | Publication date |
---|---|
CN117456121B (en) | 2024-07-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||