CN116147567B - Homeland mapping method based on multi-metadata fusion - Google Patents

Homeland mapping method based on multi-metadata fusion

Info

Publication number
CN116147567B
CN116147567B
Authority
CN
China
Prior art keywords
edge line
mapping
data
area
captured image
Prior art date
Legal status
Active
Application number
CN202310426537.3A
Other languages
Chinese (zh)
Other versions
CN116147567A (en)
Inventor
任红国
华凯
房璐璐
Current Assignee
Gaotang County Space Survey And Planning Co ltd
Original Assignee
Gaotang County Space Survey And Planning Co ltd
Priority date
Filing date
Publication date
Application filed by Gaotang County Space Survey And Planning Co ltd
Priority to CN202310426537.3A
Publication of CN116147567A
Application granted
Publication of CN116147567B
Status: Active
Anticipated expiration

Classifications

    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G01B 11/28: Measuring areas by optical techniques
    • G01B 15/00: Measuring arrangements using electromagnetic waves or particle radiation
    • G01B 21/28: Measuring areas (technique not covered by other groups of the subclass)
    • G01S 19/14: GNSS receivers specially adapted for specific applications
    • G01S 19/42: Determining position using satellite radio beacon positioning systems
    • G06F 16/332: Information retrieval; query formulation
    • G06F 40/216: Natural language parsing using statistical methods
    • G06F 40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Image or video recognition using neural networks
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02A 90/30: Assessment of water resources

Abstract

The invention discloses a homeland surveying and mapping method based on multi-source data fusion, relating to the technical field of land surveying and mapping. Multiple groups of mapping data are obtained by measuring the same area independently with a geodetic method, an aerial photogrammetry method, and a cartographic measurement method, and later data operations make the final mapping data more accurate. Electromagnetic waves are used for area detection within the mapping region, distances are measured chiefly by electromagnetic-wave ranging, and the measured distances are digitized by a distance conversion module. Fusion averages are then computed for the several groups of measurements, yielding the land-area mean closest to the true area; averaging the three groups of fusion-average data once more gives the final specific mapping data for the selected region. Because the mapping data are fused and calculated in batches, the final result is more accurate.

Description

Homeland mapping method based on multi-metadata fusion
Technical Field
The invention relates to the technical field of land surveying and mapping, and in particular to a homeland mapping method based on multi-source data fusion.
Background
Land mapping uses measurement to capture the existing feature points and boundary lines of the ground and to obtain graphic and positional information reflecting the current state of the ground. Existing homeland mapping still has the following problems:
1. When the land area is mapped, the mapping procedure is not rigorous: obstacles in the measurement area are not excluded while the equipment is measuring, so the data are inaccurate.
2. After measurement and acquisition, the mapped data are adopted directly; no optimizing calculation is performed on them, and the measured land-area data are not further fused by calculation.
Disclosure of Invention
The invention aims to provide a homeland mapping method based on multi-source data fusion. Fusion averages are computed for several groups of measurements to obtain the land-area mean closest to the true area, the three groups of fusion-average data are averaged once more, and the final specific mapping data for the selected region are obtained. Because the mapping data are fused and calculated in batches, the resulting final data are more accurate, which solves the problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A homeland mapping method based on multi-source data fusion comprises the following steps:
S1, measurement instruction: an instruction command is sent, GPS positioning fixes the longitude and latitude coordinates of the mapping area, and the designated target area is measured by three different measuring methods, each performed no fewer than three times; obstacles in the mapping area are checked for during measurement;
wherein the three different measurement methods are:
measurement by a geodetic method, an aerial photogrammetry method, and a cartographic measurement method: the land area of the region is measured by determining ground point positions, the shape and size of the region, and its gravity field, and the longitude and latitude, ground features, and landforms of the mapping area are mapped by broken-point mapping and large-scale mapping;
wherein GPS positioning and mapping further includes: checking whether the survey area contains an obstacle by means of a measuring instrument. The instrument carries an image pickup device that captures the scene in front of it; whether an obstacle is present in the captured image is judged, the judgment result is marked in the image to obtain a first captured image, and the first captured image is transmitted to a terminal over 4G/5G. The measuring instrument further includes an infrared thermal imager, which converts the invisible infrared energy emitted by objects into a visible thermal image; edge capture on the thermal image then yields a second captured image, and whether the mapping area contains an obstacle is checked on the basis of the first and second captured images. Capturing the scene in front of the image pickup device and marking in the captured image the result of judging whether it contains an obstacle, so as to obtain the first captured image, comprises: capturing the scene in front of the image pickup device to obtain an original image;
acquiring a preset number of obstacle-free historical original images taken before the original image, and determining all edge lines in the historical original images with an edge detection algorithm; determining the tangent direction and curvature of each point on each edge line and, taking the tangent direction as the vector direction and the curvature as the vector modulus, determining a first shape characterization vector of the corresponding edge line at the corresponding point;
determining a region to be identified in the historical original image from the edge line, determining the centre coordinate of that region, and taking the vector pointing from the centre coordinate to the corresponding point on the edge line as the second shape characterization vector of that point; determining the final shape characterization vector of the edge line at the point from the first and second characterization vectors; collecting the final shape characterization vectors of all points on an edge line into a shape characterization vector set, taking the set corresponding to each edge line in the historical original image as first matching data and the centre coordinate of the region to be identified corresponding to that edge line as second matching data; matching the edge lines of different historical original images on the basis of their matching data and determining the successfully matched edge line sets; taking the average of the matching data of all edge lines in a successfully matched set as the standard matching data of that set; determining all target edge lines in the original image together with their target matching data, and matching the target matching data against the standard matching data to obtain a target matching result;
judging from the target matching result whether an unmatched target edge line exists in the original image; if so, determining the position coordinates of that target edge line in the original image and judging from them whether it is an obstacle; if it is, taking "an obstacle exists in the original image" as the judgment result and marking the target edge line judged to be an obstacle in the original image to obtain the first captured image, otherwise taking "no obstacle exists in the original image" as the judgment result;
when no unmatched target edge line exists in the original image, taking "no obstacle exists in the original image" as the judgment result; the matching data comprise the first matching data and the second matching data;
checking whether an obstacle exists in the mapping area on the basis of the first and second captured images comprises: marking the edge lines contained in the first captured image on the second captured image to obtain first edge lines in the second captured image, and taking the edge lines determined by edge capture as second edge lines;
determining the first region enclosed by each first edge line and the second region enclosed by each second edge line in the second captured image, determining the region overlap degree between every first and second region, and taking each first/second region pair whose overlap degree is not below an overlap threshold as a region combination to be matched; determining the matching degree of the first and second edge lines in each such combination from their coordinates, and taking a first and second edge line whose matching degree is not below a matching threshold as a same-target edge line combination; calculating the final edge coordinate representation of a same-target edge line combination from the first coordinate representation of its first edge line and the second coordinate representation of its second edge line;
when an obstacle exists in the first captured image, judging from the final edge coordinate representation of the first edge line corresponding to that obstacle whether the corresponding area in the second captured image is also an obstacle; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free;
when no obstacle exists in the first captured image, judging from the final edge coordinate representations of all same-target edge line combinations in the second captured image whether an obstacle exists there; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free;
S2, measurement result statistics: obtaining multiple groups of mapping results, and compiling the result data of each of the different measuring methods separately;
S3, data operation: for the compiled groups of result data, obtaining plan views and three-dimensional stereograms of the buildings and terrain; once obtained, distances are derived from them by modelling and calculation, and the calculated distances are mean-fused to give multiple groups of fusion averages;
S4, data fusion calculation: the several fusion averages are collected into one data group and the fusion values in the group are averaged, finally giving the land-area mean closest to the true area.
Preferably, the measurement of the designated area in S1 further comprises:
detecting the area within the mapping region with uninterrupted electromagnetic waves: the surveyed region reflects and rebounds the waves, specific band frequencies are filtered out of the reflected waves on rebound, and the rebound distance between emission and return is converted into digital form.
Preferably, obtaining the multiple groups of mapping results in S2 further comprises:
receiving separately the result data measured and drawn by the geodetic method, the aerial photogrammetry method, and the cartographic measurement method.
Preferably, obtaining the plan views and three-dimensional stereograms of the buildings and terrain in S3 further comprises:
inputting the plan view and the three-dimensional stereogram of the buildings and terrain into a convolutional neural network for training, obtaining respectively the first feature map and the second feature map corresponding to the plan view and the stereogram;
constructing a digital stereo model corresponding to the mapping image from the first and second feature maps, calculating the electromagnetic-wave distances between the models built from the two feature maps, and fusing the multiple groups of calculation results of each measuring method several times to obtain one fusion average per method, the three measuring modes thus yielding three groups of fusion averages in total.
Preferably, step S2 includes:
performing text processing on the multiple groups of mapping results to obtain multiple mapping texts;
performing word segmentation on the mapping texts with a word segmentation algorithm to obtain a word set;
constructing a filtering database and filtering the word set against it to obtain a filtered word set;
de-duplicating the filtered word set to obtain a target word set;
evaluating the target word set against a utilization framework, obtaining the values of T1 utilization-benefit indexes, and calculating the utilization capacity value of the target word set:
where Nl is the utilization capacity value of the target word set, Nl ∈ (0, 1); Sys1 is the obtained value of the s1-th utilization-benefit index; Cys1 is the value of the s1-th utilization-benefit index predicted from the utilization framework, Cys1 ∈ (0, 1); and Sys2 is the trigger value of the s2-th utilization-benefit index in the utilization framework, Sys2 ∈ (0.5, 1);
screening out the target words whose utilization capacity value exceeds a preset threshold as key data;
performing statistics on the key data.
Compared with the prior art, the invention has the following beneficial effects:
1. In this homeland mapping method based on multi-source data fusion, GPS positioning fixes the longitude and latitude coordinates of the mapping area and can detect whether the area contains an obstacle. If an obstacle is detected, the measuring instrument's camera transmits an image to the terminal over 4G/5G, the obstacle is analysed for area, and land mapping of the mapping area then proceeds through the detection-instruction transmitting unit.
2. Multiple groups of measurement data are obtained by mapping in several modes according to the geodetic, aerial photogrammetric, and cartographic measurement methods, and later data operations make the final mapping data more accurate; the aerial photogrammetry uses a combined method, a universal method, and a differential method, so the accuracy of the measured data is optimized in each respect.
3. Electromagnetic waves are used for area detection within the mapping region, with distance mapping done chiefly by electromagnetic-wave ranging: an electromagnetic wave serves as the carrier, the modulated wave is emitted from one end of the measuring line and reflected or relayed back from the other end, the time between the electromagnetic-wave emitting module and the rebound module is measured, and the distance conversion module finally digitizes the measured distance.
4. The plan view and three-dimensional stereogram of the buildings and terrain taken during measurement are input into a convolutional neural network for training, yielding the corresponding first and second feature maps. A digital stereo model corresponding to the mapping image is built from the feature maps for accurate distance calculation; after calculation, the geodetic, aerial photogrammetric, and cartographic measurements each yield one group of fusion-average data, the three groups of fusion averages are averaged once more, and the final specific mapping data of the selected region are obtained, the final data having been fused and calculated in batches.
Drawings
FIG. 1 is a flowchart illustrating the overall steps of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
To address the prior-art problem that the mapping method is too singular when mapping land area and that the final calculation data are never recalculated, making the data inaccurate, this embodiment, with reference to FIG. 1, provides the following technical scheme:
The homeland mapping method based on multi-source data fusion comprises the following steps:
S1, measurement instruction: an instruction command is sent, GPS positioning fixes the longitude and latitude coordinates of the mapping area, and the designated target area is measured by three different measuring methods, each performed no fewer than three times; obstacles in the mapping area are checked for during measurement. GPS positioning and mapping further includes: checking whether the survey area contains an obstacle by means of a measuring instrument. The instrument carries an image pickup device that captures the scene in front of it; whether an obstacle is present in the captured image is judged, the judgment result is marked in the image to obtain a first captured image, and the first captured image is transmitted to a terminal over 4G/5G. The measuring instrument further includes an infrared thermal imager, which converts the invisible infrared energy emitted by objects into a visible thermal image; edge capture on the thermal image then yields a second captured image;
checking whether an obstacle exists in the mapping area on the basis of the first and second captured images. Capturing the scene in front of the image pickup device and marking in the captured image the result of judging whether it contains an obstacle, so as to obtain the first captured image, comprises: capturing the scene in front of the image pickup device to obtain an original image;
acquiring a preset number of obstacle-free historical original images taken before the original image, and determining all edge lines in the historical original images with an edge detection algorithm; determining the tangent direction and curvature of each point on each edge line and, taking the tangent direction as the vector direction and the curvature as the vector modulus, determining a first shape characterization vector of the corresponding edge line at the corresponding point;
determining a region to be identified in the historical original image from the edge line, determining the centre coordinate of that region, and taking the vector pointing from the centre coordinate to the corresponding point on the edge line as the second shape characterization vector of that point; determining the final shape characterization vector of the edge line at the point from the first and second characterization vectors; collecting the final shape characterization vectors of all points on an edge line into a shape characterization vector set, taking the set corresponding to each edge line in the historical original image as first matching data and the centre coordinate of the region to be identified corresponding to that edge line as second matching data; matching the edge lines of different historical original images on the basis of their matching data and determining the successfully matched edge line sets; taking the average of the matching data of all edge lines in a successfully matched set as the standard matching data of that set; determining all target edge lines in the original image together with their target matching data, and matching the target matching data against the standard matching data to obtain a target matching result;
judging from the target matching result whether an unmatched target edge line exists in the original image; if so, determining the position coordinates of that target edge line in the original image and judging from them whether it is an obstacle; if it is, taking "an obstacle exists in the original image" as the judgment result and marking the target edge line judged to be an obstacle in the original image to obtain the first captured image, otherwise taking "no obstacle exists in the original image" as the judgment result;
when no unmatched target edge line exists in the original image, taking "no obstacle exists in the original image" as the judgment result; the matching data comprise the first matching data and the second matching data;
checking whether an obstacle exists in the mapping area on the basis of the first and second captured images comprises: marking the edge lines contained in the first captured image on the second captured image to obtain first edge lines in the second captured image, and taking the edge lines determined by edge capture as second edge lines;
determining the first region enclosed by each first edge line and the second region enclosed by each second edge line in the second captured image, determining the region overlap degree between every first and second region, and taking each first/second region pair whose overlap degree is not below an overlap threshold as a region combination to be matched; determining the matching degree of the first and second edge lines in each such combination from their coordinates, and taking a first and second edge line whose matching degree is not below a matching threshold as a same-target edge line combination; calculating the final edge coordinate representation of a same-target edge line combination from the first coordinate representation of its first edge line and the second coordinate representation of its second edge line;
when an obstacle exists in the first captured image, judging from the final edge coordinate representation of the first edge line corresponding to that obstacle whether the corresponding area in the second captured image is also an obstacle; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free;
when no obstacle exists in the first captured image, judging from the final edge coordinate representations of all same-target edge line combinations in the second captured image whether an obstacle exists there; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free;
the method comprises the steps of judging the area overlapping degree of edge lines of an obstacle captured in a first captured image and edge lines identified in a second captured image, further determining edge line combinations needing to be matched, determining the same target edge line combination formed by the first edge line and the second edge line belonging to the same object area in the second captured image based on edge line matching degree calculation, determining corresponding final edge coordinate representation based on the same target edge line combination, and achieving secondary investigation and judgment of the obstacle in a mapping area based on the final edge coordinate representation, so that the obstacle identification result is more accurate.
S2: and (3) measuring result statistics: obtaining a plurality of groups of mapping results, and counting result data respectively for the results measured and drawn by a plurality of different measuring methods;
text processing is carried out on a plurality of groups of mapping results to obtain a plurality of mapping texts; a group of mapping results correspond to one mapping text, so that text processing is realized, and data statistics is convenient;
s3: and (3) data operation: respectively obtaining a plan view and a three-dimensional stereogram of a building and a topography of the statistical multi-group result data, obtaining a distance through modeling calculation after obtaining the plan view and the three-dimensional stereogram, and carrying out average value fusion on the calculated distance to obtain a multi-group fusion average value;
the convolutional neural network is input to train according to the measured plan view and the three-dimensional stereogram of the building and the topography, and the neural network train can perform parameter optimization on parameters of the plan view and the three-dimensional stereogram through a data back propagation mechanism, so that the accuracy of parameter data of the plan view and the three-dimensional stereogram is better;
s4: and (3) data fusion calculation: the fusion average values of the multiple groups are arranged into a group of data, average value calculation is carried out on the fusion values in the group of data, and finally, the land area average value closest to the accurate area is obtained;
The data fusion calculation is carried out on the three groups of fusion average value data again to calculate an average value, final specific data of the territorial mapping of the selected area is finally obtained, and the data fusion calculation is carried out on the data, so that the calculation efficiency of the data center can be greatly improved, and the final data result is more accurate.
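The S2 to S4 arithmetic reduces to a mean of per-method means. A minimal Python sketch of that fusion, assuming each survey method supplies at least three repeated area measurements as S1 requires; the function name and the numbers are illustrative, not from the patent:

    from statistics import mean

    def fuse_survey_areas(geodetic_runs, aerial_runs, cartographic_runs):
        """Fuse repeated land-area measurements per steps S2-S4.

        Each argument lists the areas measured by one survey method;
        S1 requires at least three runs per method.
        """
        for runs in (geodetic_runs, aerial_runs, cartographic_runs):
            if len(runs) < 3:
                raise ValueError("each method must be measured at least three times")
        # S3: one fusion average per survey method
        fusion_means = [mean(geodetic_runs), mean(aerial_runs), mean(cartographic_runs)]
        # S4: collect the fusion averages into one group and average again
        return mean(fusion_means)

    # Illustrative numbers only
    print(fuse_survey_areas(
        geodetic_runs=[10012.4, 10008.9, 10011.2],
        aerial_runs=[10021.7, 10018.3, 10019.5],
        cartographic_runs=[10009.8, 10013.1, 10010.6],
    ))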
Specifically, the matching data includes first matching data and second matching data;
In this embodiment, the original image is the image obtained by capturing the scene in front of the image pickup device.
In this embodiment, the preset number is the preset quantity of historical original images to acquire.
In this embodiment, a historical original image is an original image acquired before the current original image.
In this embodiment, the edge detection algorithm is, for example, a Canny operator-based edge detection algorithm.
In this embodiment, the tangent direction is the direction of the tangent line at the point in the preset coordinate system, taken toward the positive abscissa and/or positive ordinate direction.
In this embodiment, the first shape characterization vector is the vector that takes the tangent direction as its direction and the curvature as its modulus, and characterizes the local shape feature of the corresponding edge line at the corresponding point.
In this embodiment, the region to be identified is the region enclosed by the edge line in the historical original image.
In this embodiment, the center coordinate is the average value of the coordinates of all points in the area to be identified.
In this embodiment, the second shape characterization vector is a vector pointing from the center coordinates to the corresponding point on the edge line.
In this embodiment, the final shape characterization vector is the sum of the first characterization vector and the second characterization vector.
In this embodiment, the final shape characterization vector of the corresponding edge line at the corresponding point is determined based on the first characterization vector and the second characterization vector, which is:
and taking the sum of the first characterization vector and the second characterization vector as a final shape characterization vector of the corresponding edge line at the corresponding point.
In this embodiment, the shape representation vector set is a set obtained by integrating the final shape representation vectors of all points on the corresponding edge line.
In this embodiment, the first matching data is a set of shape representation vectors corresponding to the corresponding edge lines.
In this embodiment, the second matching data is the center coordinates of the to-be-identified area corresponding to the corresponding edge line.
In this embodiment, based on the matching data of the edge lines, the edge lines in different historical original images are matched, and the successfully matched edge line set is determined, namely:
taking the average of the first matching degree between the first matching data of two edge lines and the second matching degree between their corresponding second matching data as the average matching degree, determining edge lines whose average matching degree is not below a matching-degree threshold to be successfully matched, and collecting the successfully matched edge lines into an edge line set.
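The patent fixes the averaging of the two matching degrees but not the underlying similarity metrics. A Python sketch under stated assumptions: mean cosine similarity between resampled shape-characterization vector sets stands in for the first matching degree, and an inverse-distance score between centre coordinates for the second; both metrics and the scale constant are assumptions, not the patent's definitions:

    import numpy as np

    def set_similarity(vecs_a, vecs_b):
        """Assumed first matching degree between two shape-characterization
        vector sets: mean cosine similarity over equal-length resamplings."""
        a, b = np.asarray(vecs_a, float), np.asarray(vecs_b, float)
        n = min(len(a), len(b))
        a = a[np.linspace(0, len(a) - 1, n).round().astype(int)]
        b = b[np.linspace(0, len(b) - 1, n).round().astype(int)]
        cos = np.sum(a * b, axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)
        return float(np.clip(cos, 0.0, 1.0).mean())

    def center_similarity(c_a, c_b, scale=100.0):
        """Assumed second matching degree between two region centre
        coordinates: inverse-distance score with an illustrative scale."""
        d = np.linalg.norm(np.asarray(c_a, float) - np.asarray(c_b, float))
        return float(1.0 / (1.0 + d / scale))

    def average_matching_degree(first_a, center_a, first_b, center_b):
        # Average of the first and second matching degrees, as in the text
        return 0.5 * (set_similarity(first_a, first_b)
                      + center_similarity(center_a, center_b))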
In this embodiment, the edge line set is a set obtained by integrating edge lines successfully matched with each other in different historical original images.
In this embodiment, the matching data is the first matching data corresponding to the edge line and the second matching data corresponding to the edge line.
In this embodiment, the standard matching data is the average value of the matching data of all edge lines in the edge line set that is successfully matched.
In this embodiment, the target edge line is an edge line included in the original image determined based on the edge detection algorithm.
In this embodiment, the target matching data are the first matching data of the target edge line (i.e., its shape characterization vector set, acquired in the same way as an edge line's shape characterization vector set) and its second matching data (i.e., the centre coordinate of the region corresponding to the target edge line).
In this embodiment, the target matching result is a result obtained after matching the target matching data and the standard matching data, and includes a target edge line in the original image, which is successfully matched with an edge line in the history original image.
In this embodiment, the position coordinates are coordinate values of each point on the edge line of the object in the original image.
In this embodiment, whether the target edge line is an obstacle is determined based on the position coordinates, that is:
and determining an obstacle region in the original image, judging whether the position coordinates are in the obstacle region, if so, judging that the target edge line is an obstacle, otherwise, judging that the target edge line is not the obstacle.
The beneficial effects of this technique are as follows: the tangent direction and curvature of each point on an edge line in the historical original images obtained before the original image, together with the vector from the centre coordinate of the edge line's region to the point, determine shape characterization vectors that represent the edge-line features; combined with the centre coordinate of the corresponding region, these give the matching data of each edge line. Matching edge lines across different historical original images on the basis of these data identifies all edge lines belonging to the same region in all historical original images, and averaging their matching data gives the standard matching data of that region. Matching the target matching data of the edge lines in the original image against the standard matching data shows whether an image region that never appeared before has appeared in the original image, and the coordinate position then decides whether it is an obstacle, achieving accurate obstacle checking and judgment.
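A Python sketch of the per-point shape-characterization vectors for one edge line, using finite-difference tangents and the standard planar curvature estimate; the discretization, and using the mean of the edge points as the region centre, are assumptions the patent leaves open:

    import numpy as np

    def shape_characterization_vectors(edge_points):
        """Final shape characterization vectors for one edge line.

        edge_points: (N, 2) array of (x, y) coordinates along the edge.
        First vector : unit tangent direction scaled by local curvature
                       (tangent direction as direction, curvature as modulus).
        Second vector: from the region centre (assumed here to be the mean
                       of the edge points) to each point.
        Final vector : their sum, as the patent states.
        """
        p = np.asarray(edge_points, dtype=float)
        d1 = np.gradient(p, axis=0)               # first derivatives (dx, dy)
        d2 = np.gradient(d1, axis=0)              # second derivatives
        speed = np.linalg.norm(d1, axis=1) + 1e-9
        curvature = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / speed**3
        tangent_dir = d1 / speed[:, None]         # unit tangent = vector direction
        first = tangent_dir * curvature[:, None]  # modulus = curvature

        center = p.mean(axis=0)                   # centre coordinate of the region
        second = p - center                       # centre -> point vectors
        return first + second                     # final shape characterization vectors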
In an embodiment, checking whether there is an obstacle in the mapping area based on the first captured image and the second captured image includes:
marking the edge lines contained in the first captured image on the second captured image to obtain first edge lines in the second captured image, and taking the edge lines determined by edge capture as second edge lines;
determining the first region enclosed by each first edge line and the second region enclosed by each second edge line in the second captured image, determining the region overlap degree between every first and second region, and taking each first/second region pair whose overlap degree is not below an overlap threshold as a region combination to be matched;
determining the matching degree of the first and second edge lines in each such combination from their coordinates, and taking a first and second edge line whose matching degree is not below a matching threshold as a same-target edge line combination;
calculating the final edge coordinate representation of a same-target edge line combination from the first coordinate representation of its first edge line and the second coordinate representation of its second edge line;
when an obstacle exists in the first captured image, judging from the final edge coordinate representation of the first edge line corresponding to that obstacle whether the corresponding area in the second captured image is also an obstacle; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free;
when no obstacle exists in the first captured image, judging from the final edge coordinate representations of all same-target edge line combinations in the second captured image whether an obstacle exists there; if so, judging that the mapping area contains an obstacle, otherwise judging it obstacle-free.
In this embodiment, a first edge line is an edge line in the second captured image obtained by marking on it an edge line contained in the first captured image.
In this embodiment, the second edge line is the edge line determined after the edge capturing in the thermal image.
In this embodiment, the first region is a region surrounded by the first edge line in the second captured image.
In this embodiment, the second area is the area enclosed by the second edge line in the second captured image.
In this embodiment, the area overlapping degree between each first area and each second area is determined, which is:
determining the area of the overlapping region between the first region and the second region, and taking the mean of the first ratio (overlap area to first-region area) and the second ratio (overlap area to second-region area) as the corresponding region overlap degree.
In this embodiment, the region overlapping degree is a value representing the region overlapping degree between the first region and the second region.
In this embodiment, the overlap-degree threshold is the preset minimum region overlap degree at which a first region and a second region qualify as a region combination to be matched.
In this embodiment, the region to be matched is a first region and a second region whose region overlapping degree is not less than the overlapping degree threshold.
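A Python sketch of this region overlap degree computed on binary region masks; the mask representation and the threshold value are illustrative assumptions:

    import numpy as np

    def region_overlap_degree(mask_a, mask_b):
        """Overlap degree between a first and a second region.

        mask_a, mask_b: boolean arrays of the same shape, True inside each
        region. Returns the mean of (overlap/area_a) and (overlap/area_b),
        as the text defines it.
        """
        mask_a = np.asarray(mask_a, dtype=bool)
        mask_b = np.asarray(mask_b, dtype=bool)
        overlap = np.logical_and(mask_a, mask_b).sum()
        area_a, area_b = mask_a.sum(), mask_b.sum()
        if area_a == 0 or area_b == 0:
            return 0.0
        return 0.5 * (overlap / area_a + overlap / area_b)

    OVERLAP_THRESHOLD = 0.6  # illustrative; the patent leaves the value unspecified

    def is_region_combination_to_match(mask_a, mask_b):
        return region_overlap_degree(mask_a, mask_b) >= OVERLAP_THRESHOLD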
In this embodiment, the matching degree of the first edge line and the second edge line in the region combination to be matched is determined from the coordinates of the first edge line and the coordinates of the second edge line as follows:
for each point on the first edge line paired with each point on the second edge line, calculating the ratio of their coordinate difference (abscissa difference and ordinate difference) to the corresponding coordinate value (abscissa and ordinate) of the point on the first edge line, and taking 1 minus this ratio as the coordinate coincidence degree; the matching degree of the two edge lines is then the sum of the coordinate coincidence degrees over all point pairs divided by the product of the total numbers of points on the first and second edge lines.
In this embodiment, the matching-degree threshold is the minimum matching degree at which a first edge line and a second edge line qualify as a same-target edge line combination.
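A Python sketch of this matching degree, following the prose formula above; averaging the abscissa and ordinate ratios into one coincidence value per point pair is our reading of the text, and a small epsilon guards against division by zero coordinates:

    import numpy as np

    def edge_matching_degree(first_edge, second_edge):
        """Matching degree between a first and a second edge line.

        For every point pair, the coordinate coincidence degree is 1 minus
        the ratio of the coordinate difference to the coordinate value of
        the point on the first edge line; the matching degree is the sum of
        coincidence degrees divided by the product of the point counts.
        """
        a = np.asarray(first_edge, dtype=float)   # (N1, 2)
        b = np.asarray(second_edge, dtype=float)  # (N2, 2)
        diff = np.abs(a[:, None, :] - b[None, :, :])    # (N1, N2, 2) pairwise diffs
        ratio = diff / (np.abs(a)[:, None, :] + 1e-9)   # relative to first-edge coords
        coincidence = 1.0 - ratio.mean(axis=2)          # (N1, N2); can go below 0
        return float(coincidence.sum() / (len(a) * len(b)))

Note that, taken literally, the coincidence degree can be negative for widely separated points; the definition above is faithful to the text rather than normalized.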
In this embodiment, the final edge coordinate representation of a same-target edge line combination is calculated from the first coordinate representation of the first edge line and the second coordinate representation of the second edge line contained in that combination, namely:
taking the average of the first coordinate representation of the first edge line and the second coordinate representation of the second edge line contained in the same-target edge line combination as its final edge coordinate representation.
In this embodiment, the final edge coordinate representation is the average of the first coordinate representation of the first edge line and the second coordinate representation of the second edge line comprised in the same edge line combination.
In this embodiment, judging from the final edge coordinate representation of the first edge line corresponding to the obstacle whether the corresponding area in the second captured image is an obstacle means:
determining the obstacle region in the second captured image, and judging from the final edge coordinate representation whether the first edge line of the corresponding obstacle overlaps that region; if so, the area corresponding to the final edge coordinate representation in the second captured image is judged to be an obstacle, otherwise it is not.
In this embodiment, judging from the final edge coordinate representations of all same-target edge line combinations in the second captured image whether an obstacle exists there means:
determining the obstacle region in the second captured image, and judging whether the final edge coordinate representation of any same-target edge line combination overlaps it; if so, the corresponding area in the second captured image is judged to be an obstacle, otherwise it is not.
The beneficial effects of this technique are as follows: the edge lines of the obstacle captured in the first captured image and the edge lines identified in the second captured image are judged for region overlap, which determines the edge-line combinations that need matching; edge-line matching-degree calculation then identifies, in the second captured image, the same-target edge line combinations formed by first and second edge lines belonging to the same object region; the corresponding final edge coordinate representation is determined from each combination, and on that basis a secondary check for obstacles in the mapping area is carried out, making the obstacle identification result more accurate.
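A Python sketch of this secondary decision logic, assuming the obstacle region of the second captured image is available as a boolean mask and the final edge coordinate representations as arrays of (x, y) points; both representations are assumptions for illustration:

    import numpy as np

    def edge_overlaps_obstacle(final_edge_coords, obstacle_mask):
        """True if any final edge coordinate falls inside the obstacle
        region of the second captured image (obstacle_mask: H x W bools)."""
        pts = np.asarray(final_edge_coords, dtype=float).round().astype(int)
        h, w = obstacle_mask.shape
        inside = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
        pts = pts[inside]
        return bool(obstacle_mask[pts[:, 1], pts[:, 0]].any())

    def survey_area_has_obstacle(first_image_has_obstacle, obstacle_edge_coords,
                                 all_combination_coords, obstacle_mask):
        """Secondary obstacle check combining the two captured images."""
        if first_image_has_obstacle:
            # Confirm the flagged edge against the thermal image's obstacle region
            return edge_overlaps_obstacle(obstacle_edge_coords, obstacle_mask)
        # Otherwise screen every same-target edge line combination
        return any(edge_overlaps_obstacle(c, obstacle_mask)
                   for c in all_combination_coords)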
For the plurality of measurement methods, the method further comprises:
measurement by a geodetic method, an aerial photogrammetry method, and a cartographic measurement method: the land area of the region is measured by determining ground point positions, the shape and size of the region, and its gravity field, and the longitude and latitude, ground features, and landforms of the mapping area are mapped by broken-point mapping and large-scale mapping.
Measurement within the designated area further comprises:
detecting the area within the mapping region with uninterrupted electromagnetic waves: the surveyed region reflects and rebounds the waves, specific band frequencies are filtered out of the reflected waves on rebound, and the rebound distance between emission and return is converted into digital form.
Obtaining the multiple sets of mapping results further comprises:
receiving separately the result data measured and drawn by the geodetic method, the aerial photogrammetry method, and the cartographic measurement method.
Specifically, the aerial photogrammetry measures the land area of the mapping region by a combined method, a universal method, and a differential method.
The combined method is a mapping method that joins photogrammetry with plane-table measurement: the planimetric positions of ground features and landforms on the topographic map are obtained from photographs rectified by photographic rectification, giving a photomap or line-drawn map, while the elevations and contours of terrain points are measured in the field by ordinary survey methods. It is suitable for large-scale mapping of flat areas.
The universal method establishes, in a stereo plotting instrument, a geometric model of the photographed ground from a stereo image pair according to the principle of geometric inversion of the photographic process, and maps the topographic map from that model. Because no control points are added during relative orientation of the image pair and only the pair's inherent geometric properties are used, the orientation of the established model is arbitrary and its scale only approximate, so absolute orientation is required before measurement.
The differential method (also called the division method) maps according to the principle of separating planimetric and elevation measurement. Its main instrument is the stereometer, a device that measures left-right parallax differences and draws contour lines on the right photograph from a near-vertical photographic image pair; the contour lines drawn on the photograph and the transferred ground features are then projected and transferred zone by zone with a projection transfer instrument.
Electromagnetic-wave ranging uses an electromagnetic wave as the carrier: the modulated wave is emitted from one end of the measuring line and reflected or relayed back from the other end, the time between the electromagnetic-wave emitting module and the rebound module is measured, and the distance conversion module finally digitizes the measured distance.
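The ranging relation itself is the standard two-way time-of-flight formula, distance = c * t / 2. A Python sketch with an assumed atmospheric refractive index and an assumed millimetre quantization step standing in for the distance conversion module:

    SPEED_OF_LIGHT = 299_792_458.0  # m/s

    def em_range(round_trip_time_s, refractive_index=1.0003):
        """Electromagnetic-wave ranging: the modulated wave travels out and
        back, so the one-way distance is c * t / (2 * n). The refractive
        index default is an illustrative value, not from the patent."""
        return SPEED_OF_LIGHT * round_trip_time_s / (2.0 * refractive_index)

    def digitize_distance(distance_m, resolution_m=0.001):
        """Distance conversion module: quantize the measured distance to a
        digital value (millimetre resolution assumed for illustration)."""
        return round(distance_m / resolution_m) * resolution_m

    # A 667 ns round trip corresponds to roughly 100 m
    print(digitize_distance(em_range(667e-9)))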
To address the prior-art problem that, after measurement and acquisition, the mapped data are adopted directly, no optimizing calculation is performed on them, and the measured land-area data are not further fused by calculation, making the data inaccurate, this embodiment, with reference to FIG. 1, provides the following technical scheme:
Obtaining the plan views and three-dimensional stereograms of the buildings and terrain further comprises:
inputting the plan view and the three-dimensional stereogram of the buildings and terrain into a convolutional neural network for training, obtaining respectively the first feature map and the second feature map corresponding to the plan view and the stereogram;
constructing a digital stereo model corresponding to the mapping image from the first and second feature maps, calculating the electromagnetic-wave distances between the models built from the two feature maps, and fusing the multiple groups of calculation results of each measuring method several times to obtain one fusion average per method, the three measuring modes thus yielding three groups of fusion averages in total.
Specifically, three different measured data results are received, and a preliminary fusion average is calculated from them. Geodetic measurement, aerial photogrammetry, and map construction measurement are each carried out several times, and the data of each run are averaged. The plan view and the three-dimensional stereogram of the building and the topography acquired during measurement are input into a convolutional neural network for training, yielding a first feature image and a second feature image corresponding to the plan view and the stereogram respectively. Accurate distance calculation is then performed on the digital stereo model of the mapping image constructed from the first and second feature images. After this calculation, geodetic measurement, aerial photogrammetry, and map construction measurement each yield one group of fusion average data; the average of these three groups of fusion averages is computed again, giving the final specific figure for the land mapping of the selected area.
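A minimal sketch of the two-stage averaging just described, with made-up sample values: each method's repeated runs are averaged into a per-method fusion mean, and the three fusion means are averaged again into the final land-area figure.

```python
from statistics import mean

# Illustrative numbers only; the patent specifies the procedure, not the data.
measurements_m2 = {
    "geodetic":              [10012.4, 10008.9, 10011.1],
    "aerial_photogrammetry": [10015.2, 10009.7, 10013.8],
    "map_construction":      [10010.3, 10012.6, 10011.9],
}

# Stage 1: one fusion average per measurement method.
fusion_means = {method: mean(runs) for method, runs in measurements_m2.items()}

# Stage 2: average of the three fusion averages -> final land area.
final_area = mean(fusion_means.values())
print(fusion_means, f"final ≈ {final_area:.1f} m²")
```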
In one embodiment, step S2 includes:
text processing is carried out on a plurality of groups of mapping results to obtain a plurality of mapping texts;
word segmentation processing is carried out on the mapping text based on a word segmentation algorithm, so that a word set is obtained;
constructing a filtering database, and filtering the word set based on the filtering database to obtain a filtered word set;
performing de-duplication treatment on the filtering word set to obtain a target word set;
determining the target word set based on the utilization framework, obtaining the values of the T1 utilization benefit indexes, and calculating the corresponding utilization capacity value of the target word set:
wherein Nl is the utilization capacity value corresponding to the target word set, Nl ∈ (0, 1); Sys1 is the obtained value of the s1-th utilization benefit index; Cys1 is the value of the s1-th utilization benefit index predicted by the utilization framework, Cys1 ∈ (0, 1); Sys2 is the trigger value of the s2-th utilization benefit index in the utilization framework, Sys2 ∈ (0.5, 1);
screening out a target word set with the utilization capacity value larger than a preset threshold value as key data;
and counting according to the key data.
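As a hedged sketch of the S2 pipeline above: word segmentation, filtering, de-duplication, and threshold screening. The function score_utilization stands in for the patent's utilization-capacity formula, which is not reproduced in this text, and the filter words and scoring rule are illustrative assumptions.

```python
import re

FILTER_DB = {"the", "of", "and", "a", "in"}  # illustrative filtering database
THRESHOLD = 0.5  # the preset threshold named in the embodiment

def extract_key_data(mapping_text: str, score_utilization) -> set[str]:
    words = re.findall(r"\w+", mapping_text.lower())      # word segmentation
    filtered = [w for w in words if w not in FILTER_DB]   # filtering
    target_set = set(filtered)                            # de-duplication
    # keep only words whose utilization capacity exceeds the threshold
    return {w for w in target_set if score_utilization(w) > THRESHOLD}

# Hypothetical scorer: favors numeric survey values over plain words.
key = extract_key_data("Area of the parcel is 10012 square meters",
                       score_utilization=lambda w: 0.9 if w.isdigit() else 0.2)
print(key)
```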
The working principle and beneficial effects of this technical scheme are as follows: text processing is carried out on the plurality of groups of mapping results to obtain a plurality of mapping texts; each group of mapping results corresponds to one mapping text, which realizes the text processing and makes data statistics convenient.
Word segmentation is performed on the mapping text based on a word segmentation algorithm to obtain a word set, which makes it easy to determine all the words the mapping text contains.
A filtering database is constructed, and the word set is filtered against it to obtain a filtered word set. The filtering database consists of the words that need to be filtered out, so meaningless words that would otherwise lower the efficiency of the subsequent data statistics can be conveniently screened out.
De-duplication is applied to the filtered word set to obtain a target word set, which removes repeated words, reduces the volume of data to be processed, and improves data-processing efficiency.
The target word set is evaluated with the utilization framework, the values of the T1 utilization benefit indexes are obtained, and the corresponding utilization capacity value of the target word set is calculated. The utilization framework is a neural model trained on historical data; it quantifies the utilization capacity of the target word set and thereby makes it convenient to decide whether the set is key data. The utilization capacity value is calculated accurately from the formula, which improves the reliability of the comparison against the preset threshold. The utilization benefit indexes comprise a benefit parameter for utilization time, a benefit parameter for the number of utilizations, and a benefit parameter for the utilization scene.
Target word sets whose utilization capacity value is larger than the preset threshold are screened out as key data, and statistics are carried out on the key data. This makes it possible to quickly determine the key data among the plurality of groups of mapping results and to compile the result statistics, improving the efficiency of data statistics. The preset threshold is 0.5, a value set on the basis of multiple experiments.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. The homeland mapping method based on multi-metadata fusion is characterized by comprising the following steps:
S1, measurement instruction: issuing an instruction command, locating the geographic longitude and latitude coordinates of the mapping area by GPS, measuring the designated area within the target by three different measuring methods, each of the three methods being performed no fewer than three times, and checking the mapping area for obstacles during measurement;
wherein the three different measuring methods are used as follows:
measuring according to geodetic surveying, aerial photogrammetry, and map construction measurement, measuring the land area of the region by determining the ground point positions, the shape and size of the region, and the gravity field of the region, and mapping the longitude and latitude, ground features, and landforms of the mapping area by a broken-point mapping method and a large-scale mapping method;
wherein, during positioning and mapping, the GPS step further comprises: checking whether the survey area has an obstacle by means of a measuring instrument, the measuring instrument being provided with an image pickup device that can capture an image of the picture in front of it; a determination result, obtained by judging whether an obstacle exists in the captured image, is marked in the captured image to obtain a first captured image, and the first captured image is transmitted to a terminal through 4G/5G; the measuring instrument further comprises an infrared thermal imager, which converts the invisible infrared energy emitted by an object into a visible thermal image, edge capturing is then performed on the thermal image to obtain a second captured image, and whether the mapping area has an obstacle is checked based on the first captured image and the second captured image; wherein capturing an image of the picture in front of the image pickup device and marking the determination result obtained by judging whether an obstacle exists in the captured image, to obtain the first captured image, comprises: capturing an image of the picture in front of the image pickup device to obtain an original image;
acquiring a preset number of obstacle-free historical original images captured before the original image, and determining all edge lines in the historical original images based on an edge detection algorithm; determining the tangent direction and curvature of each point of an edge line on the corresponding edge line, taking the tangent direction as the vector direction and the curvature as the modulus of the vector, and thereby determining a first shape representation vector of the corresponding edge line at the corresponding point;
determining a region to be identified in the historical original image based on the edge lines, determining the center coordinate of the region to be identified, and taking the vector pointing from the center coordinate to the corresponding point on the edge line as a second shape representation vector of that point; determining a final shape characterization vector of the corresponding edge line at the corresponding point based on the first shape representation vector and the second shape representation vector; summarizing the final shape representation vectors of all points on the corresponding edge line to obtain a shape representation vector set, taking the shape representation vector set corresponding to each edge line in the historical original image as first matching data, and taking the center coordinate of the region to be identified corresponding to that edge line as second matching data; matching the edge lines in different historical original images based on their matching data, and determining the edge line sets that are successfully matched; taking the average value of the matching data of all edge lines in a successfully matched edge line set as the standard matching data of that edge line set; determining all target edge lines in the original image and the target matching data of the target edge lines, and matching the target matching data with the standard matching data to obtain a target matching result;
judging, based on the target matching result, whether an unmatched target edge line exists in the original image; if so, determining the position coordinates of that target edge line in the original image and judging, based on the position coordinates, whether the target edge line is an obstacle; if it is, taking the presence of an obstacle in the original image as the corresponding determination result and marking the target edge line judged to be an obstacle in the original image to obtain the first captured image; otherwise, taking the absence of an obstacle in the original image as the corresponding determination result;
when no unmatched target edge line exists in the original image, taking the absence of an obstacle in the original image as the corresponding determination result, wherein the matching data comprise the first matching data and the second matching data;
the method for checking whether the mapping area has an obstacle based on the first captured image and the second captured image comprises the following steps: marking the edge lines contained in the first captured image on the second captured image to obtain first edge lines in the second captured image, and taking the edge lines determined after edge capturing as second edge lines;
determining a first region of each first edge line in the second captured image and a second region of each second edge line in the second captured image, determining the region overlapping degree between each first region and each second region, and combining each first region and second region whose overlapping degree is not smaller than an overlapping-degree threshold as a region combination to be matched; determining the matching degree of the first edge line and the second edge line in the region combination to be matched based on the coordinates of the first edge line and of the second edge line, and taking each first edge line and second edge line whose matching degree is not smaller than a matching-degree threshold as one same-target edge line combination; calculating a final edge coordinate representation corresponding to the same-target edge line combination based on the first coordinate representation of the first edge line and the second coordinate representation of the second edge line contained in that combination;
when an obstacle exists in the first captured image, judging, based on the final edge coordinate representation corresponding to the first edge line of the obstacle, whether the corresponding region in the second captured image is an obstacle; if so, judging that the mapping area has an obstacle, otherwise judging that it does not;
when no obstacle exists in the first captured image, judging, based on the final edge coordinate representations of all same-target edge line combinations in the second captured image, whether an obstacle exists in the second captured image; if so, judging that the mapping area has an obstacle, otherwise judging that it does not;
S2: measurement result statistics: obtaining a plurality of groups of mapping results, and separately compiling the result data measured and mapped by the several different measuring methods;
S3: data operation: obtaining, for the compiled groups of result data, a plan view and a three-dimensional stereogram of the building and the topography; after the plan view and the stereogram are obtained, computing distances through modeling calculation, and fusing the computed distances by averaging to obtain several groups of fusion average values;
wherein the average value fusion obtains one group of fusion average data each from the geodetic measurement, aerial photogrammetry, and map construction measurement calculations;
S4: data fusion calculation: arranging the several groups of fusion average values into one group of data, and calculating the average of the several fusion values in that group;
wherein the average calculation of the fusion values computes, by data fusion calculation, the average of the three groups of fusion average data from geodetic measurement, aerial photogrammetry, and map construction measurement, finally obtaining the land-area average value closest to the accurate area.
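For illustration only, and not part of the claims: a minimal sketch of the shape characterization in claim 1. For each point on an edge line it builds the first shape representation vector (direction = local tangent, modulus = local curvature) and the second shape representation vector (from the region center to the point). How the claim combines the two into the final vector is unspecified; simple addition is assumed here, and all names are hypothetical.

```python
import numpy as np

def shape_vectors(edge_pts: np.ndarray, center: np.ndarray) -> np.ndarray:
    """edge_pts: (N, 2) ordered points of one edge line; center: (2,)."""
    d1 = np.gradient(edge_pts, axis=0)      # first derivative along the line
    d2 = np.gradient(d1, axis=0)            # second derivative
    speed = np.linalg.norm(d1, axis=1) + 1e-9
    tangent = d1 / speed[:, None]           # unit tangent direction
    # planar curvature: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    curvature = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / speed**3
    first = tangent * curvature[:, None]    # tangent direction, curvature modulus
    second = edge_pts - center              # center coordinate -> edge point
    return first + second                   # assumed combination into the final vector

pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [3.0, 3.0]])
final_vectors = shape_vectors(pts, center=pts.mean(axis=0))  # one vector per point
```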
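Likewise illustrative only: a sketch of the region-combination step of claim 1, reading the region overlapping degree as intersection-over-union of axis-aligned bounding boxes — one plausible measure, since the claim does not fix the exact metric.

```python
def bbox_iou(a, b):
    """a, b = (xmin, ymin, xmax, ymax); returns an overlap degree in [0, 1]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def regions_to_match(first_regions, second_regions, overlap_threshold=0.5):
    """Pair every first region with every second region whose overlap
    degree is not smaller than the threshold."""
    return [(i, j) for i, a in enumerate(first_regions)
                   for j, b in enumerate(second_regions)
                   if bbox_iou(a, b) >= overlap_threshold]

# (0,0,4,4) vs (1,1,4,4): IoU = 9/16 ≈ 0.56 >= 0.5, so the pair is kept.
pairs = regions_to_match([(0.0, 0.0, 4.0, 4.0)], [(1.0, 1.0, 4.0, 4.0)])
```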
2. The homeland mapping method based on multi-metadata fusion of claim 1, wherein obtaining a plurality of groups of mapping results in S2 further comprises:
respectively receiving the result data measured and mapped by the geodetic, aerial photogrammetry, and map construction measurement methods.
3. The homeland mapping method based on multi-metadata fusion of claim 1, wherein the acquisition of the plan view and the three-dimensional stereogram of the building and the topography in S3 further comprises the following steps:
inputting the plan view and the three-dimensional stereogram of the building and the topography into a convolutional neural network for training to respectively obtain a first characteristic image and a second characteristic image corresponding to the plan view and the three-dimensional stereogram of the building and the topography;
and constructing a digital three-dimensional model corresponding to the mapping image based on the first feature image and the second feature image, calculating distances from the acquired digital three-dimensional model, performing multiple fusion calculations on the calculation results, and taking the fusion average value after calculation, the three groups of measurement modes yielding three groups of data fusion average values in total.
4. The homeland mapping method based on multi-metadata fusion of claim 1, wherein step S2 comprises:
text processing is carried out on a plurality of groups of mapping results to obtain a plurality of mapping texts;
word segmentation processing is carried out on the mapping text based on a word segmentation algorithm, so that a word set is obtained;
constructing a filtering database, and filtering the word set based on the filtering database to obtain a filtered word set;
performing de-duplication treatment on the filtering word set to obtain a target word set;
determining the target word set based on the utilization framework, obtaining the values of the T1 utilization benefit indexes, and calculating the corresponding utilization capacity value of the target word set:
wherein Nl is the utilization capacity value corresponding to the target word set, Nl ∈ (0, 1); Sys1 is the obtained value of the s1-th utilization benefit index; Cys1 is the value of the s1-th utilization benefit index predicted by the utilization framework, Cys1 ∈ (0, 1); Sys2 is the trigger value of the s2-th utilization benefit index in the utilization framework, Sys2 ∈ (0.5, 1);
Screening out a target word set with the utilization capacity value larger than a preset threshold value as key data;
and counting according to the key data.
CN202310426537.3A 2023-04-20 2023-04-20 Homeland mapping method based on multi-metadata fusion Active CN116147567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310426537.3A CN116147567B (en) 2023-04-20 2023-04-20 Homeland mapping method based on multi-metadata fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310426537.3A CN116147567B (en) 2023-04-20 2023-04-20 Homeland mapping method based on multi-metadata fusion

Publications (2)

Publication Number Publication Date
CN116147567A CN116147567A (en) 2023-05-23
CN116147567B (en) 2023-07-21

Family

ID=86362211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310426537.3A Active CN116147567B (en) 2023-04-20 2023-04-20 Homeland mapping method based on multi-metadata fusion

Country Status (1)

Country Link
CN (1) CN116147567B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778105B (en) * 2023-08-17 2023-11-21 云南高阳科技有限公司 Fusion modeling method based on multi-precision three-dimensional mapping data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021093418A1 (en) * 2019-11-12 2021-05-20 深圳创维数字技术有限公司 Ground obstacle detection method and device, and computer-readable storage medium
WO2022252380A1 (en) * 2021-06-04 2022-12-08 魔门塔(苏州)科技有限公司 Multi-frame fusion method and apparatus for grounding contour line of stationary obstacle, and medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299244B (en) * 2014-09-26 2017-07-25 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
JP2019207170A (en) * 2018-05-30 2019-12-05 Necソリューションイノベータ株式会社 Land area measuring device, measuring method, program, and recording medium
CN109631750B (en) * 2018-12-18 2021-07-20 深圳市沃特沃德股份有限公司 Surveying and mapping method, surveying and mapping device, computer equipment and storage medium
CN109883320B (en) * 2019-01-22 2021-01-08 湖南土流信息有限公司 Land area measuring method and system
CN110132238B (en) * 2019-05-09 2021-12-17 抚顺市规划勘测设计研究院有限公司 Unmanned aerial vehicle surveying and mapping method for terrain image digital elevation model
CN110645969A (en) * 2019-10-29 2020-01-03 广州国苑规划设计有限公司 Land surveying and mapping method for homeland planning
CN113108690A (en) * 2021-04-06 2021-07-13 成都网感科技有限公司 Area measurement method suitable for large-area land measurement
CN113360587B (en) * 2021-06-16 2022-07-05 湖南航天智远科技有限公司 Land surveying and mapping equipment and method based on GIS technology
CN114493333A (en) * 2022-02-14 2022-05-13 湖北铭思远工程咨询有限公司 Real-time collection and processing method for land surveying and mapping operation data
CN114998276B (en) * 2022-06-14 2023-06-09 中国矿业大学 Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN115164814A (en) * 2022-07-18 2022-10-11 广东精地规划科技有限公司 Method for measuring land area through smart phone

Also Published As

Publication number Publication date
CN116147567A (en) 2023-05-23

Similar Documents

Publication Publication Date Title
Liu et al. Image‐based crack assessment of bridge piers using unmanned aerial vehicles and three‐dimensional scene reconstruction
CN105358937A (en) Positioning method for a surveying instrument and said surveying instrument
CN104134216B (en) The laser point cloud autoegistration method described based on 16 dimensional features and system
CN116147567B (en) Homeland mapping method based on multi-metadata fusion
CN112634340A (en) Method, device, equipment and medium for determining BIM (building information modeling) model based on point cloud data
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113132717A (en) Data processing method, terminal and server
CN114863380B (en) Lane line identification method and device and electronic equipment
CN115578539B (en) Indoor space high-precision visual position positioning method, terminal and storage medium
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN111739099B (en) Falling prevention method and device and electronic equipment
CN110851978B (en) Camera position optimization method based on visibility
CN115690746A (en) Non-blind area sensing method and system based on vehicle-road cooperation
CN114608522B (en) Obstacle recognition and distance measurement method based on vision
CN115345945A (en) Automatic inspection method and system for reconstructing expressway by using multi-view vision of unmanned aerial vehicle
CN113160292B (en) Laser radar point cloud data three-dimensional modeling device and method based on intelligent mobile terminal
WO2022121024A1 (en) Unmanned aerial vehicle positioning method and system based on screen optical communication
CN113532424A (en) Integrated equipment for acquiring multidimensional information and cooperative measurement method
US20240185595A1 (en) Method for evaluating quality of point cloud map based on matching
CN116385994A (en) Three-dimensional road route extraction method and related equipment
CN114742876B (en) Land vision stereo measurement method
CN113190564A (en) Map updating system, method and device
CN114266830B (en) Underground large space high-precision positioning method
CN115240091A (en) Method and device for quickly detecting rice panicle grains
CN114783181A (en) Traffic flow statistical method and device based on roadside perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant