CN115861561B - Contour line generation method and device based on semantic constraint - Google Patents


Info

Publication number: CN115861561B
Application number: CN202310160119.4A
Authority: CN (China)
Prior art keywords: point cloud, cloud data, processed, region, data
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN115861561A
Inventors: 王涛, 王宇翔, 刘会安, 杨娜
Current assignee: Aerospace Hongtu Information Technology Co Ltd
Original assignee: Aerospace Hongtu Information Technology Co Ltd
Application filed by Aerospace Hongtu Information Technology Co Ltd
Priority to CN202310160119.4A
Publication of CN115861561A
Application granted
Publication of CN115861561B

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a contour line generation method and device based on semantic constraint, relating to the technical field of geographic mapping. The method comprises the following steps: acquiring an image sequence of a region to be processed; generating point cloud data of the region to be processed based on the image sequence, wherein the point cloud data carries semantic information; generating DEM data of the region to be processed based on the semantic information and the point cloud data; and extracting contour lines of the region to be processed from the DEM data, thereby solving the technical problems of low generation efficiency and low accuracy in existing contour line generation methods.

Description

Contour line generation method and device based on semantic constraint
Technical Field
The invention relates to the technical field of geographical mapping, in particular to a contour line generation method and device based on semantic constraint.
Background
Contour lines are important elements of digital line graphics: they reflect the relief and variation of terrain and landforms, are widely used across many industries, and are an indispensable basic geographic information element. At present, the prevailing domestic workflow for producing DEMs and contour lines is the traditional stereoscopic acquisition method: contour lines, elevation points and feature lines are first collected to generate DEM data, and contour lines are then generated automatically from the DEM data.
The traditional stereoscopic acquisition method requires professional 3D glasses and input devices, and relies almost entirely on manual visual interpretation and manual drawing in a stereoscopic viewing environment; the workload is heavy and the process is time-consuming. Moreover, DEM generation that ignores abrupt changes in terrain and landform yields a poor rendering of the surface, and the resulting point cloud and DEM data are of low accuracy.
An effective solution to the above-mentioned problems has not been proposed yet.
Disclosure of Invention
In view of the above, the invention aims to provide a contour line generation method and device based on semantic constraint, so as to solve the technical problems of low generation efficiency and low accuracy of existing contour line generation methods.
In a first aspect, an embodiment of the present invention provides a contour line generating method based on semantic constraint, including: acquiring an image sequence of a region to be processed; generating point cloud data of the region to be processed based on the image sequence of the region to be processed, wherein the point cloud data carries semantic information; generating DEM data of the area to be processed based on the semantic information and the point cloud data; and extracting the contour line of the region to be processed by using the DEM data of the region to be processed.
Further, generating the point cloud data of the region to be processed based on the image sequence of the region to be processed includes: computing the feature descriptors of each image in the image sequence using the SIFT operator; constructing initial point cloud data of the region to be processed based on the feature descriptors and the image sequence; segmenting each image in the image sequence with a Segformer multi-class semantic segmentation model and determining the semantic information of each pixel in each image; and generating the point cloud data of the region to be processed based on the initial point cloud data and the semantic information of each pixel in each image.
Further, constructing the initial point cloud data of the region to be processed based on the feature descriptors and the image sequence includes: determining matched feature points among the feature descriptors of the image sequence using a cascade hashing algorithm; performing space forward intersection on the feature points based on the target information of the camera corresponding to each image to determine the three-dimensional coordinates of the feature points, where the target information includes position information and attitude information; optimizing the attitude information and the three-dimensional coordinates using the target data and a bundle adjustment algorithm to obtain sparse point cloud data of the region to be processed; and processing the sparse point cloud data with the PMVS algorithm to obtain the initial point cloud data.
Further, generating the point cloud data of the region to be processed based on the initial point cloud data and the semantic information of each pixel in each image includes: triangulating the initial point cloud data to obtain triangulated initial point cloud data; determining, based on the photogrammetric collinearity equations, the two-dimensional image coordinates of each point in the triangulated initial point cloud data; determining the semantic information of each point from its two-dimensional coordinates and the semantic information of each pixel in each image; and attaching the semantic information of each point to it as a semantic label, thereby obtaining the point cloud data of the region to be processed.
Further, converting the point cloud data of the region to be processed, which carries semantic information, into DEM data of the region to be processed includes: denoising the point cloud data with a mean filtering algorithm to obtain first point cloud data; performing classification processing and target processing on the first point cloud data to obtain second point cloud data, where the classification processing includes automatic point cloud classification and human-computer interactive classification, and the target processing includes adding break lines, flattening operations and point supplementation; and constructing the DEM data of the region to be processed based on the semantic information carried by the point cloud data and the ground points in the second point cloud data.
Further, the method further comprises performing quality inspection on the DEM data of the region to be processed, including: mathematical basis inspection; grid spacing inspection; DEM extent (start/end point coordinate) correctness inspection; data completeness and gross elevation anomaly inspection; data surface morphology correctness inspection; edge-matching inspection; and DEM elevation RMS error accuracy inspection.
Further, extracting contour lines of the region to be processed from its DEM data includes: extracting contour lines for each terrain category in the region based on the terrain categories and the DEM data of the region; and repairing and inspecting the contour lines of each terrain category to obtain the contour lines of the region to be processed.
In a second aspect, an embodiment of the present invention further provides a contour line generation device based on semantic constraint, including: an acquisition unit for acquiring an image sequence of a region to be processed; a point cloud generation unit for generating point cloud data of the region to be processed based on the image sequence, where the point cloud data carries semantic information; a DEM generation unit for generating DEM data of the region to be processed based on the semantic information and the point cloud data; and a contour extraction unit for extracting contour lines of the region to be processed from the DEM data.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory is configured to store a program for supporting the processor to execute the method described in the first aspect, and the processor is configured to execute the program stored in the memory.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the method described in the first aspect.
In the embodiment of the invention, an image sequence of a region to be processed is acquired; point cloud data carrying semantic information is generated from the image sequence; DEM data of the region is generated based on the semantic information and the point cloud data; and contour lines of the region are extracted from the DEM data. Semantic constraints are thus introduced into the contour generation process, solving the technical problem of low generation efficiency and low accuracy in existing contour line generation methods and achieving the technical effect of improving both the efficiency and the accuracy of contour generation.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a contour line generation method based on semantic constraints provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of a contour line generating device based on semantic constraint according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
In accordance with an embodiment of the present invention, an embodiment of a contour line generation method based on semantic constraint is provided. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one illustrated here.
FIG. 1 is a flow chart of a contour generation method based on semantic constraints according to an embodiment of the present invention, as shown in FIG. 1, the method comprising the steps of:
step S102, acquiring an image sequence of a region to be processed;
it should be noted that, the image sequence is obtained by means of an unmanned plane or a laser radar.
Step S104, generating point cloud data of the region to be processed based on the image sequence of the region to be processed, wherein the point cloud data carries semantic information;
step S106, based on the semantic information and the point cloud data, DEM data of the area to be processed are generated;
step S108, extracting contour lines of the area to be processed by using the DEM data of the area to be processed.
In the embodiment of the invention, an image sequence of a region to be processed is acquired; point cloud data carrying semantic information is generated from the image sequence; DEM data of the region is generated based on the semantic information and the point cloud data; and contour lines of the region are extracted from the DEM data. Semantic constraints are thus introduced into the contour generation process, solving the technical problem of low generation efficiency and low accuracy in existing contour line generation methods and achieving the technical effect of improving both the efficiency and the accuracy of contour generation.
In the embodiment of the present invention, step S104 includes the following steps:
step S201, calculating each image feature sub-in the image sequence by using a SIFT operator;
step S202, constructing initial point cloud data of the region to be processed based on the feature and the image sequence of the region to be processed;
step S203, segmenting each image in the image sequence by using a Segfomer multi-semantic segmentation model, and determining semantic information of each pixel point in each image;
step S204, generating point cloud data of the to-be-processed area based on the initial point cloud data of the to-be-processed area and semantic information of each pixel point in each image.
Specifically, in the embodiment of the present invention, step S202 includes the following steps:
determining matched feature points among the feature descriptors of the image sequence using a cascade hashing algorithm;
performing space forward intersection on the feature points based on the target information of the camera corresponding to each image to determine the three-dimensional coordinates of the feature points, where the target information includes position information and attitude information;
optimizing the attitude information and the three-dimensional coordinates using the target data and a bundle adjustment algorithm to obtain sparse point cloud data of the region to be processed;
and processing the sparse point cloud data with the PMVS algorithm to obtain the initial point cloud data.
Step S204 includes the steps of:
triangulating the initial point cloud data to obtain triangulated initial point cloud data;
determining, based on the photogrammetric collinearity equations, the two-dimensional image coordinates of each point in the triangulated initial point cloud data;
determining the semantic information of each point from its two-dimensional coordinates and the semantic information of each pixel in each image;
and attaching the semantic information of each point to it as a semantic label, thereby obtaining the point cloud data of the region to be processed.
In the embodiment of the invention, the feature descriptors of each image in the image sequence are first computed with the SIFT operator and stored.
A cascade hashing algorithm then assigns a compact code to each descriptor; the images in the sequence are matched descriptor by descriptor, the corresponding feature points are determined, and tie-point (homonymous point) relationships are established among feature points on different images.
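The hashing-accelerated matching step can be illustrated with a small sketch. This is an illustrative reconstruction, not code from the patent: SIFT-style descriptors are hashed with random hyperplanes so that candidate matches are restricted to one bucket, and candidates are then verified with Lowe's ratio test on true L2 distances. All function names and parameters here are hypothetical.

```python
import numpy as np

def hash_codes(desc, planes):
    # Project descriptors onto random hyperplanes; the sign pattern is the code.
    return (desc @ planes.T > 0).astype(np.uint8)

def match_features(desc_a, desc_b, n_bits=8, ratio=0.8, seed=0):
    """Hash-accelerated matching of two SIFT descriptor sets (rows = descriptors).

    Candidates are limited to descriptors falling in the same hash bucket,
    then verified with Lowe's ratio test on true L2 distances.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, desc_a.shape[1]))
    code_a = hash_codes(desc_a, planes)
    code_b = hash_codes(desc_b, planes)
    # Bucket the descriptors of image B by their binary code.
    buckets = {}
    for j, c in enumerate(map(tuple, code_b)):
        buckets.setdefault(c, []).append(j)
    matches = []
    for i, c in enumerate(map(tuple, code_a)):
        cand = buckets.get(c, [])
        if len(cand) < 2:
            continue
        d = np.linalg.norm(desc_b[cand] - desc_a[i], axis=1)
        order = np.argsort(d)
        if d[order[0]] < ratio * d[order[1]]:
            matches.append((i, cand[order[0]]))
    return matches
```

A production cascade-hashing matcher refines candidates through several hash layers; the single layer above only conveys the bucketing idea.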
Taking the low-accuracy camera position and attitude of each image as initial values, a set of three-dimensional coordinates of the tie points is obtained by space forward intersection using the photogrammetric collinearity equations.
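Space forward intersection from two views can be sketched as a linear least-squares (DLT) triangulation. This is an assumed formulation consistent with the collinearity model, not the patent's exact algorithm; `P1` and `P2` are hypothetical 3x4 projection matrices assembled from each camera's position, attitude and interior orientation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Space forward intersection of one tie point seen in two images.

    P1, P2 : 3x4 projection matrices; x1, x2 : (u, v) pixel coordinates of
    the same feature in each image.  Each observation contributes two rows
    to the homogeneous system A X = 0, solved by SVD; the inhomogeneous
    3-D point is returned.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy observations the SVD solution is the algebraic least-squares intersection of the two rays, which is exactly what the subsequent bundle adjustment refines.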
The camera positions and attitudes, the two-dimensional feature coordinates and the three-dimensional feature coordinates are then refined by bundle adjustment based on the least-squares principle, yielding higher-accuracy camera attitude information (described by a rotation matrix) for each image together with refined three-dimensional feature coordinates, which form a three-dimensional point cloud (the sparse point cloud data).
Using the PMVS method, under the constraints of local photometric consistency and global visibility, the sparse point cloud data is matched, expanded and filtered to generate a dense three-dimensional point cloud with true colors (the initial point cloud data).
The images are then segmented with the Segformer multi-class semantic segmentation model to obtain the semantic information of every pixel of every image in the sequence.
Finally, the initial point cloud data is triangulated, and the two-dimensional coordinates of each three-dimensional point on the images that observe it are computed from the photogrammetric collinearity principle. The semantic labels of those two-dimensional pixels are collected and, after disambiguation, assigned to the three-dimensional point, so that every three-dimensional point carries its own semantic label; the result is a dense, semantically labelled three-dimensional point cloud (the point cloud data of the region to be processed).
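The label-assignment step above might look like the following sketch, under the assumption that each observing image contributes one vote and disambiguation is a simple majority; the patent does not specify the disambiguation rule, and all names here are hypothetical.

```python
import numpy as np
from collections import Counter

def project(P, X):
    """Collinearity projection of a 3-D point with 3x4 matrix P -> (u, v)."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

def label_point(X, cameras, seg_maps):
    """Assign a semantic label to one 3-D point.

    cameras  : list of 3x4 projection matrices of the images that see X
    seg_maps : matching list of 2-D integer label rasters produced by the
               semantic segmentation of each image.
    The point inherits the majority label of the pixels it projects to;
    returns -1 when the point projects outside every image.
    """
    votes = []
    for P, seg in zip(cameras, seg_maps):
        u, v = project(P, X)
        r, c = int(round(v)), int(round(u))
        if 0 <= r < seg.shape[0] and 0 <= c < seg.shape[1]:
            votes.append(int(seg[r, c]))
    return Counter(votes).most_common(1)[0][0] if votes else -1
```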
In the embodiment of the present invention, step S106 includes the following steps:
denoising the point cloud data with a mean filtering algorithm to obtain first point cloud data;
performing classification processing and target processing on the first point cloud data to obtain second point cloud data, where the classification processing includes automatic point cloud classification and human-computer interactive classification, and the target processing includes adding break lines, flattening operations and point supplementation;
and constructing the DEM data of the region to be processed based on the semantic information carried by the point cloud data and the ground points in the second point cloud data.
Quality inspection is then performed on the DEM data of the region to be processed, including: mathematical basis inspection; grid spacing inspection; DEM extent (start/end point coordinate) correctness inspection; data completeness and gross elevation anomaly inspection; data surface morphology correctness inspection; edge-matching inspection; and DEM elevation RMS error accuracy inspection.
In the embodiment of the invention, the point cloud of the region to be processed contains obvious error points where the imagery shows water-surface reflection or specular reflection from buildings. Exploiting the continuity in height between a terrain point and its neighbouring points, the point cloud is processed with a mean filtering method to remove this noise.
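A minimal version of such a mean-filter denoiser, assuming a fixed planimetric search radius and a fixed elevation-deviation threshold (both hypothetical parameters not stated in the patent), could be:

```python
import numpy as np

def denoise_mean(points, radius=5.0, max_dev=2.0):
    """Mean-filter denoising of a terrain point cloud (N x 3 array of x, y, z).

    Each point's elevation is compared with the mean elevation of its
    planimetric neighbours; points deviating by more than max_dev metres
    (e.g. reflection noise over water or glass facades) are dropped.
    Brute-force O(N^2) neighbour search, fine for a sketch; a KD-tree
    would be used in practice.
    """
    xy, z = points[:, :2], points[:, 2]
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        d = np.linalg.norm(xy - xy[i], axis=1)
        nb = (d < radius) & (d > 0)          # neighbours, excluding the point itself
        if nb.any() and abs(z[i] - z[nb].mean()) > max_dev:
            keep[i] = False
    return points[keep]
```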
Next, depending on the actual situation, point clouds of roads, rivers, buildings and other features with reliable matching information are selected for special processing. Any automatic point cloud classification algorithm will leave some points under-classified, mis-classified or incompletely classified, so manual intervention is needed to classify the point cloud correctly. For points that the earlier automatic classification mis-classified, missed or could not distinguish, the errors must be corrected manually in an interactive mode on top of the automatic result. The automatic classification result is loaded, a surface triangulated irregular network (TIN) is generated, and the elevation information of the point cloud is checked by pulling cross-sections through the TIN in different areas, looking for discontinuous, uneven or hole-ridden features or terrain, or large elevation differences within the same feature (obvious pits or spikes), to judge whether points are mis-classified or missed. To examine mis-classified points in surface detail more carefully, contemporaneous DOM or DLG data of the same area is overlaid on the point cloud TIN during the work for terrain comparison and auxiliary judgment. Different terrain and ground features are identified piece by piece, and with a manual point cloud classification tool the mis-classified or missed points are locally reassigned, iteratively and according to the correct feature type, into the corresponding point cloud categories, improving the accuracy of the final manual classification result.
Then, break lines are added, and rivers, ditches, lakes, reservoirs, ponds and other water bodies are flattened to their water levels and supplemented with points, yielding the DEM data of the region to be processed. This resolves the problem that the raw point cloud cannot represent the true terrain and thereby guarantees the accuracy of the DEM.
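The gridding of classified ground points into a DEM raster can be sketched with inverse-distance weighting. This is a stand-in for whatever interpolation a production tool actually uses (TIN-based linear interpolation, for instance), and the grid parameters are hypothetical:

```python
import numpy as np

def grid_dem(ground_pts, x0, y0, nx, ny, cell, power=2.0):
    """Rasterise classified ground points into a DEM grid by inverse-distance
    weighting.

    ground_pts : N x 3 array of (x, y, z) ground points
    (x0, y0)   : grid origin; nx, ny : grid size; cell : grid spacing
    Returns an ny x nx elevation grid.
    """
    dem = np.empty((ny, nx))
    for r in range(ny):
        for c in range(nx):
            gx, gy = x0 + c * cell, y0 + r * cell
            d = np.hypot(ground_pts[:, 0] - gx, ground_pts[:, 1] - gy)
            if d.min() < 1e-9:                 # grid node coincides with a point
                dem[r, c] = ground_pts[np.argmin(d), 2]
                continue
            w = 1.0 / d ** power
            dem[r, c] = np.sum(w * ground_pts[:, 2]) / np.sum(w)
    return dem
```

Break lines and water-body flattening would be applied by constraining or overwriting the interpolated elevations; that step is omitted here.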
Finally, the DEM data of the region to be processed undergoes quality inspection: mathematical basis inspection, grid spacing inspection, DEM extent (start/end point coordinate) correctness inspection, data completeness and gross elevation anomaly inspection, data surface morphology correctness inspection, edge-matching inspection, and DEM elevation RMS error accuracy inspection.
In the embodiment of the present invention, step S108 includes the following steps:
extracting contour lines for each terrain category in the region to be processed based on the terrain categories and the DEM data of the region;
and repairing and inspecting the contour lines of each terrain category to obtain the contour lines of the region to be processed.
In the embodiment of the invention, contour lines are extracted automatically, with the region divided into blocks by semantic information — residential areas, water systems, woodland and so on — and generated block by block.
Different parameters are configured for different terrain categories. For example, optimizing the minimum-area parameter avoids the contour "funnel ring" phenomenon caused by small undulations on flat ground or by building elevations; the fold-angle parameter effectively suppresses contour chamfers, salient angles and other artefacts, obviously different from reality, that similar elevations would otherwise introduce; and the smoothing coefficient effectively smooths contour fold angles, self-intersections and the like, so that the contour lines of each terrain category meet their data requirements.
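The "smoothing coefficient" can be illustrated with Chaikin corner cutting, one common way to round off contour fold angles; this is an illustrative choice, not necessarily the smoothing method the patent uses:

```python
def chaikin_smooth(polyline, iterations=2):
    """Chaikin corner cutting for an open contour polyline.

    Each pass replaces every vertex pair with points at 1/4 and 3/4 along
    the segment, rounding off sharp fold angles without moving the line
    far from the data; endpoints are preserved.
    """
    pts = [tuple(p) for p in polyline]
    for _ in range(iterations):
        out = [pts[0]]
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            out.append((0.75 * x1 + 0.25 * x2, 0.75 * y1 + 0.25 * y2))
            out.append((0.25 * x1 + 0.75 * x2, 0.25 * y1 + 0.75 * y2))
        out.append(pts[-1])
        pts = out
    return pts
```

The number of iterations plays the role of the smoothing coefficient: more passes give smoother, denser polylines.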
Editing tools such as local joining, reshaping and breaking are used to repair unreasonable contour expressions, point-line contradictions, and conflicts between contour lines and ground features. Contour inspection covers elevation correctness, contour code correctness, graphic correctness, topology (pseudo-nodes, kinks, dangles and so on) and point-line contradiction checks, after which the contour lines of the region to be processed are obtained.
According to the embodiment of the invention, the image sequence is acquired by UAV or lidar; deep-learning semantic information is added during feature matching, and semantic constraints are added to the bundle adjustment, further improving the accuracy of the generated point cloud and DEM. When generating contour lines, the terrain category (plain, hills, mountains and so on) is first determined, the contour-generation parameters are adjusted accordingly, and the generated contours are then edited, improving the accuracy of contour generation and hence the production efficiency.
Embodiment two:
The embodiment of the invention also provides a contour line generation device based on semantic constraint for executing the method provided by the embodiments above; the device is introduced in detail below.
As shown in fig. 2, fig. 2 is a schematic diagram of the above-mentioned contour generating apparatus based on semantic constraint, and the contour generating apparatus based on semantic constraint includes:
an acquisition unit 10 for acquiring an image sequence of a region to be processed;
the point cloud generating unit 20 is configured to generate point cloud data of the area to be processed based on the image sequence of the area to be processed, where the point cloud data carries semantic information;
a DEM generating unit 30, configured to generate DEM data of the area to be processed based on the semantic information and the point cloud data;
and a contour extraction unit 40, configured to extract a contour of the region to be processed using DEM data of the region to be processed.
In the embodiment of the invention, an image sequence of a region to be processed is acquired; point cloud data carrying semantic information is generated from the image sequence; DEM data of the region is generated based on the semantic information and the point cloud data; and contour lines of the region are extracted from the DEM data. Semantic constraints are thus introduced into the contour generation process, solving the technical problem of low generation efficiency and low accuracy in existing contour line generation methods and achieving the technical effect of improving both the efficiency and the accuracy of contour generation.
Embodiment III:
an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory is configured to store a program that supports the processor to execute the method described in the first embodiment, and the processor is configured to execute the program stored in the memory.
Referring to fig. 3, an embodiment of the present invention further provides an electronic device 100, including: a processor 50, a memory 51, a bus 52 and a communication interface 53, the processor 50, the communication interface 53 and the memory 51 being connected by the bus 52; the processor 50 is arranged to execute executable modules, such as computer programs, stored in the memory 51.
The memory 51 may include high-speed random access memory (RAM) and may further include non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is achieved via at least one communication interface 53 (wired or wireless), which may use the internet, a wide area network, a local network, a metropolitan area network, and so on.
Bus 52 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in FIG. 3, but not only one bus or type of bus.
The memory 51 is configured to store a program; after receiving an execution instruction, the processor 50 executes the program. The method performed by the apparatus disclosed in any of the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 50.
The processor 50 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuitry in hardware in the processor 50 or by instructions in the form of software. The processor 50 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic device, or discrete hardware component, which may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 51, and the processor 50 reads the information in the memory 51 and completes the steps of the above method in combination with its hardware.
Embodiment IV:
the embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method in the first embodiment are performed.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art on a case-by-case basis.
In the description of the present invention, it should be noted that the orientations or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the orientations or positional relationships shown in the drawings, and are merely for convenience and simplicity of description; they do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
Finally, it should be noted that the above embodiments are only specific implementations of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A contour line generation method based on semantic constraints, comprising:
acquiring an image sequence of a region to be processed;
generating point cloud data of the region to be processed based on the image sequence of the region to be processed, wherein the point cloud data carries semantic information;
generating DEM data of the area to be processed based on the semantic information and the point cloud data;
extracting contour lines of the region to be processed by using DEM data of the region to be processed;
the generating of the point cloud data of the region to be processed based on the image sequence of the region to be processed comprises the following steps:
calculating feature descriptors of each image in the image sequence by using a SIFT operator;
constructing initial point cloud data of the region to be processed based on the feature descriptors and the image sequence of the region to be processed;
segmenting each image in the image sequence by using a Segformer multi-class semantic segmentation model, and determining semantic information of each pixel point in each image;
generating the point cloud data of the area to be processed based on the initial point cloud data of the area to be processed and the semantic information of each pixel point in each image;
the constructing of the initial point cloud data of the region to be processed based on the feature descriptors and the image sequence of the region to be processed comprises the following steps:
determining feature points matched with the feature descriptors in the image sequence of the region to be processed based on a cascade hashing algorithm;
carrying out spatial forward intersection calculation on the feature points based on target information of the cameras corresponding to the images, and determining three-dimensional coordinate information of the feature points, wherein the target information comprises: position information and attitude information;
optimizing the attitude information and the three-dimensional coordinate information by using the target information and a bundle adjustment algorithm to obtain sparse point cloud data of the region to be processed;
and processing the sparse point cloud data by using a PMVS algorithm to obtain the initial point cloud data.
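The spatial forward intersection step of claim 1 can be illustrated with a minimal linear (DLT) triangulation sketch. The patent does not disclose its exact solver, so the function below, including its name and the use of 3x4 projection matrices, is only an assumption of how two matched image observations yield a 3D point:

```python
import numpy as np

def forward_intersect(P1, P2, x1, x2):
    """Linear (DLT) spatial forward intersection: recover the 3D point X
    whose projections through camera matrices P1, P2 are pixels x1, x2.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * row3 - row1 = 0
        x1[1] * P1[2] - P1[1],   # v1 * row3 - row2 = 0
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null vector of A (smallest singular value) is the homogeneous point.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

In a real pipeline the result would then be refined, together with the camera poses, by the bundle adjustment step named in the claim.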
2. The method of claim 1, wherein generating the point cloud data of the region to be processed based on the initial point cloud data of the region to be processed and the semantic information of the respective pixels in each image comprises:
performing triangulation processing on the initial point cloud data to obtain triangulated initial point cloud data;
determining two-dimensional coordinate information of each point in the triangulated initial point cloud data based on the photogrammetric collinearity equations;
determining semantic information of each point in the triangulated initial point cloud data based on the two-dimensional coordinate information of each point in the triangulated initial point cloud data and the semantic information of each pixel point in each image;
and configuring the semantic information of each point in the triangulated initial point cloud data as a semantic label on the corresponding point, so as to obtain the point cloud data of the area to be processed.
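The label-transfer idea in claim 2 — project each 3D point into an image and copy the semantic class of the pixel it lands on — can be sketched as follows. A plain pinhole model stands in for the collinearity equations; `label_points` and the `-1` no-label convention are illustrative assumptions, not part of the patent:

```python
import numpy as np

def label_points(points, K, R, t, label_map):
    """Assign each 3D point the semantic label of the pixel it projects to.
    points: (N, 3) world coordinates; K: 3x3 intrinsics; R, t: world->camera
    pose; label_map: (H, W) integer semantic segmentation of the image."""
    cam = points @ R.T + t            # world -> camera coordinates
    uv = cam @ K.T                    # pinhole projection (collinearity)
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    h, w = label_map.shape
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    labels = np.full(len(points), -1, dtype=int)  # -1 = projects off-image
    labels[valid] = label_map[rows[valid], cols[valid]]
    return labels
```

With several overlapping images, a production system would additionally resolve conflicting labels (e.g. by voting), which this sketch omits.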
3. The method of claim 1, wherein generating DEM data of the area to be processed based on the semantic information and the point cloud data comprises:
denoising the point cloud data by using a mean-filtering algorithm to obtain first point cloud data;
performing classification processing and target processing on the first point cloud data to obtain second point cloud data, wherein the classification processing comprises: automatic point cloud classification and interactive (manual) classification, and the target processing comprises: adding breaklines, flattening, and point supplementation;
and constructing DEM data of the area to be processed based on the semantic information carried by the point cloud data and the ground points in the second point cloud data.
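A simple way to picture the DEM construction in claim 3 is to keep only points whose semantic label marks them as ground and average their elevations per grid cell. The patent does not specify the gridding scheme, so this cell-mean rasterization, the function name, and the label encoding are all assumptions:

```python
import numpy as np

def ground_points_to_dem(xyz, labels, ground_label, cell, grid_shape,
                         origin=(0.0, 0.0)):
    """Rasterize ground-labeled points into a DEM by averaging elevations
    per grid cell (a stand-in for the patent's DEM construction).
    xyz: (N, 3) points; labels: (N,) semantic labels; cell: grid spacing."""
    g = xyz[labels == ground_label]              # semantic constraint: ground only
    rows = ((g[:, 1] - origin[1]) / cell).astype(int)
    cols = ((g[:, 0] - origin[0]) / cell).astype(int)
    dem = np.full(grid_shape, np.nan)            # NaN = no ground observation
    count = np.zeros(grid_shape)
    ssum = np.zeros(grid_shape)
    np.add.at(count, (rows, cols), 1)            # unbuffered per-cell counts
    np.add.at(ssum, (rows, cols), g[:, 2])
    mask = count > 0
    dem[mask] = ssum[mask] / count[mask]
    return dem
```

Excluding non-ground labels (vegetation, buildings) before gridding is exactly where the semantic information earns its keep: elevations of roofs and canopies never contaminate the terrain surface.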
4. A method according to claim 3, characterized in that the method further comprises:
performing deliverable inspection on the DEM data of the area to be processed, wherein the deliverable inspection comprises: mathematical basis inspection, grid spacing inspection, DEM range (start and end point coordinate accuracy) inspection, data integrity and gross elevation anomaly inspection, data surface morphology accuracy inspection, edge-matching inspection, and DEM elevation error accuracy inspection.
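Two of the DEM quality checks listed in claim 4 reduce to simple predicates. The thresholds, the function name, and the 4-neighbour definition of a "gross elevation anomaly" below are illustrative assumptions; the patent does not define the check criteria:

```python
import numpy as np

def check_dem(dem, cell, expected_cell, max_jump):
    """Two claim-4 style checks as predicates:
    - grid-spacing check: actual cell size matches the specified one;
    - gross elevation-anomaly check: no cell differs from a 4-neighbour
      by more than max_jump (a crude blunder detector)."""
    spacing_ok = abs(cell - expected_cell) < 1e-9
    jumps = max(
        np.abs(np.diff(dem, axis=0)).max(initial=0.0),  # vertical neighbours
        np.abs(np.diff(dem, axis=1)).max(initial=0.0),  # horizontal neighbours
    )
    return spacing_ok, bool(jumps <= max_jump)
```

Real deliverable inspection would also compare against surveyed check points for the elevation-error accuracy item, which requires reference data this sketch does not model.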
5. The method of claim 1, wherein extracting contours of the region to be processed using DEM data of the region to be processed, comprises:
extracting contour lines of each terrain category in the region to be processed based on the terrain categories of the region to be processed and the DEM data of the region to be processed;
and correcting and checking the contour lines of each terrain category in the region to be processed, so as to obtain the contour lines of the region to be processed.
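The contour extraction step can be illustrated by locating where a given elevation level crosses the grid edges of the DEM. This is only a toy stand-in: production extraction (as claim 5 implies) traces closed contour lines, typically with a marching-squares style algorithm, whereas this sketch merely interpolates crossing points along horizontal edges:

```python
import numpy as np

def contour_crossings(dem, level, cell=1.0):
    """Find points where a contour at `level` crosses horizontal grid edges
    of a DEM, by linear interpolation between adjacent cell elevations.
    Returns a list of (y, x) coordinates in DEM units."""
    pts = []
    n_rows, n_cols = dem.shape
    for r in range(n_rows):
        for c in range(n_cols - 1):
            a, b = dem[r, c], dem[r, c + 1]
            if (a - level) * (b - level) < 0:   # edge straddles the level
                t = (level - a) / (b - a)       # linear interpolation factor
                pts.append((r * cell, (c + t) * cell))
    return pts
```

Running this for each contour interval per terrain category would give the raw vertices that the claimed correction-and-checking step then smooths and validates.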
6. A semantic constraint-based contour generation apparatus, comprising:
an acquisition unit for acquiring an image sequence of a region to be processed;
the point cloud generation unit is used for generating point cloud data of the area to be processed based on the image sequence of the area to be processed, wherein the point cloud data carries semantic information;
the DEM generation unit is used for generating DEM data of the area to be processed based on the semantic information and the point cloud data;
the contour extraction unit is used for extracting the contour of the region to be processed by using the DEM data of the region to be processed;
the generating of the point cloud data of the region to be processed based on the image sequence of the region to be processed comprises the following steps:
calculating feature descriptors of each image in the image sequence by using a SIFT operator;
constructing initial point cloud data of the region to be processed based on the feature descriptors and the image sequence of the region to be processed;
segmenting each image in the image sequence by using a Segformer multi-class semantic segmentation model, and determining semantic information of each pixel point in each image;
generating the point cloud data of the area to be processed based on the initial point cloud data of the area to be processed and the semantic information of each pixel point in each image;
the constructing of the initial point cloud data of the region to be processed based on the feature descriptors and the image sequence of the region to be processed comprises the following steps:
determining feature points matched with the feature descriptors in the image sequence of the region to be processed based on a cascade hashing algorithm;
carrying out spatial forward intersection calculation on the feature points based on target information of the cameras corresponding to the images, and determining three-dimensional coordinate information of the feature points, wherein the target information comprises: position information and attitude information;
optimizing the attitude information and the three-dimensional coordinate information by using the target information and a bundle adjustment algorithm to obtain sparse point cloud data of the region to be processed;
and processing the sparse point cloud data by using a PMVS algorithm to obtain the initial point cloud data.
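The hash-based matching step named in the claims can be pictured with a single-level locality-sensitive toy. Real cascade hashing uses multiple hashing levels and candidate reranking; the sketch below only keeps the core idea of bucketing descriptors by random-projection sign codes so that nearest-neighbour search runs inside matching buckets only. All names here are illustrative:

```python
import numpy as np

def hash_match(desc_a, desc_b, n_bits=8, seed=0):
    """Toy locality-sensitive matching in the spirit of cascade hashing:
    descriptors are bucketed by the signs of random projections, and
    nearest neighbours are searched only inside matching buckets."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((desc_a.shape[1], n_bits))

    def code(d):
        # One integer hash code per descriptor from projection signs.
        bits = (d @ planes) > 0
        return bits @ (1 << np.arange(n_bits))

    ca, cb = code(desc_a), code(desc_b)
    matches = []
    for i, c in enumerate(ca):
        cand = np.where(cb == c)[0]          # same-bucket candidates only
        if cand.size:
            d2 = np.sum((desc_b[cand] - desc_a[i]) ** 2, axis=1)
            matches.append((i, int(cand[np.argmin(d2)])))
    return matches
```

The payoff over brute-force matching is that each query compares against a small bucket instead of every descriptor in the other image, which is what makes SfM matching tractable on large image sequences.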
7. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a program supporting the processor to perform the method of any one of claims 1 to 5, and the processor is configured to execute the program stored in the memory.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, performs the steps of the method according to any of the preceding claims 1 to 5.
CN202310160119.4A 2023-02-24 2023-02-24 Contour line generation method and device based on semantic constraint Active CN115861561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310160119.4A CN115861561B (en) 2023-02-24 2023-02-24 Contour line generation method and device based on semantic constraint


Publications (2)

Publication Number Publication Date
CN115861561A CN115861561A (en) 2023-03-28
CN115861561B true CN115861561B (en) 2023-05-30

Family

ID=85658792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310160119.4A Active CN115861561B (en) 2023-02-24 2023-02-24 Contour line generation method and device based on semantic constraint

Country Status (1)

Country Link
CN (1) CN115861561B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363983A (en) * 2018-03-06 2018-08-03 河南理工大学 A kind of Urban vegetation classification method based on unmanned plane image Yu reconstruction point cloud

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110799983A (en) * 2018-11-22 2020-02-14 深圳市大疆创新科技有限公司 Map generation method, map generation equipment, aircraft and storage medium
CN112069856B (en) * 2019-06-10 2024-06-14 商汤集团有限公司 Map generation method, driving control device, electronic equipment and system
CN112101066B (en) * 2019-06-17 2024-03-08 商汤集团有限公司 Target detection method and device, intelligent driving method and device and storage medium
CN110806175B (en) * 2019-11-20 2021-04-30 中国有色金属长沙勘察设计研究院有限公司 Dry beach monitoring method based on three-dimensional laser scanning technology
CN114842438B (en) * 2022-05-26 2024-06-14 重庆长安汽车股份有限公司 Terrain detection method, system and readable storage medium for automatic driving automobile


Also Published As

Publication number Publication date
CN115861561A (en) 2023-03-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant