CN115100266A - Digital airport model construction method, system and equipment based on neural network - Google Patents
- Publication number
- CN115100266A (application number CN202211015870.7A)
- Authority
- CN
- China
- Prior art keywords
- airport
- target
- metadata
- neural network
- digital
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/55 — Depth or shape recovery from multiple images
- G06N3/02 — Neural networks
- G06N3/08 — Learning methods
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/10032 — Satellite or aerial image; remote sensing
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30181 — Earth observation
- G06T2207/30184 — Infrastructure
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
Abstract
The invention belongs to the field of data processing, and specifically relates to a neural-network-based method, system, and device for constructing a digital airport model, aiming to solve the problem that existing airport modeling by manual carving requires manual adjustment and involves cumbersome technique. The invention comprises the following steps: acquiring a high-resolution satellite map and a depth camera image of the airport environment to be modeled; performing information matching between the high-resolution satellite map and the depth camera image and converting them into target airport metadata; performing feature extraction on the target airport metadata through a metadata-based convolutional neural network to obtain target features; and integrating the target features into a digital airport model. The method prevents the multiple kinds of interference present in a single image from degrading model restoration, improving both the accuracy and the visual experience of the restored model.
Description
Technical Field
The invention belongs to the field of data processing, and particularly relates to a method, a system and equipment for constructing a digital airport model based on a neural network.
Background
With the development of transport technology, flying has become one of the main modes of travel. To enable digital management and planning of aircraft at an airport, real airport imagery must be turned into a visual, digital model. Existing approaches build an airport model by manual carving: every airport object is entered into the model at a preset size, which demands a high level of modeling skill from technicians, must be redone for each airport, and must be manually revised by technicians whenever the environment changes. Such methods are therefore technically cumbersome, error-prone, and incapable of real-time updates. What is needed is an airport modeling method that converts airport imagery into a visualization automatically and with real-time updates.
Disclosure of Invention
In order to solve the above problems in the prior art, namely that existing manually carved airport modeling methods require manual adjustment, are technically cumbersome, error-prone, and cannot be updated in real time, the invention provides a neural-network-based digital airport model construction method comprising the following steps:
s100, acquiring a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
step S300, based on the target airport metadata, extracting features through a convolutional neural network based on the metadata to obtain a target feature set;
and S400, integrating and obtaining a digital airport model based on the target feature set.
In some preferred embodiments, the metadata-based convolutional neural network is trained as follows:
Step A100, acquire standard airport metadata carrying a description feature label for each preset geographic description type;
Step A200, copy the standard airport metadata of each geographic description type label and feed the copies simultaneously into a plurality of basic feature extraction sub-networks, wherein each basic feature extraction sub-network extracts only one geographic description type;
Step A300, extract a target feature through each basic feature extraction sub-network;
Step A400, evaluate the target features and compute the loss function;
Step A500, iterate until the preset number of training rounds is reached or the loss function falls below a preset threshold; the set of all basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the sub-networks together constitute the target feature set.
In some preferred embodiments, step S400 specifically comprises: matching the target slice features against the target airport metadata, assigning the corresponding geographic-information material to the position of each data point, and generating a visual digital airport model.
In some preferred embodiments, matching the target features with the target airport metadata specifically comprises:
identifying, according to the target feature set, the feature types contained in the target airport metadata, and determining through a weight learning model the proportion of each geographic description type within each piece of target airport metadata.
In some preferred embodiments, the step S200 specifically includes:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map by a feature point matching method to obtain a high-resolution space image;
and step S220, generating target airport metadata based on the high-resolution space image.
In some preferred embodiments, the method further includes a step of slicing the target airport metadata: the target airport metadata is cut to a preset size with the QGIS geographic information processing software to obtain target airport slice metadata, and the subsequent steps treat the airport slice metadata as the target airport metadata.
In another aspect of the present invention, a digital airport model building system based on a neural network is provided, the system includes:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
the data cutting module is configured to cut according to geographic information based on the target airport metadata to obtain a plurality of target airport slices;
the characteristic extraction module is configured to extract characteristics through a convolutional neural network based on slice metadata based on the target airport slice to obtain target slice characteristics;
and the environment restoration module is configured to integrate and obtain the digital airport model based on the target slice characteristics.
In a third aspect of the present invention, an electronic device is provided, including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the neural-network-based digital airport model construction method described above.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, which stores computer instructions for being executed by the computer to implement the above-mentioned method for constructing a digital airport model based on a neural network.
The invention has the beneficial effects that:
(1) according to the method, the corresponding feature extraction network is independently arranged for each geographic information type, so that the accuracy of extracting the features of the complex image is improved, the influence of various types of interference on model restoration in one image is avoided, and the accuracy and visual experience of model restoration are improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a method for constructing a digital airport model based on a neural network according to an embodiment of the present invention;
FIG. 2 is a block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a method for constructing a digital airport model based on a neural network, which improves the accuracy of extracting the characteristics of complex images by independently setting corresponding characteristic extraction networks for each geographic information type, avoids the influence of various types of interference in one image on model restoration, and improves the accuracy and visual experience of model restoration.
The invention discloses a digital airport model construction method based on a neural network, which comprises the following steps:
s100, acquiring a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
step S300, based on the target airport metadata, extracting features through a convolutional neural network based on the metadata to obtain a target feature set;
and S400, integrating to obtain a digital airport model based on the target feature set.
In order to more clearly explain the method for constructing a digital airport model based on a neural network according to the present invention, the following describes in detail the steps in the embodiment of the present invention with reference to fig. 1.
The method for constructing the digital airport model based on the neural network comprises the following steps S100-S400, and the steps are described in detail as follows:
and S100, acquiring a high-resolution satellite map and a depth camera image of the airport environment to be modeled.
And S200, performing information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata.
In this embodiment, the step S200 specifically includes:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map by a feature point matching method to obtain a high-resolution space image;
and step S220, generating target airport metadata based on the high-resolution space image.
In this embodiment, the method further comprises a step of roughly dividing the airport metadata into hierarchical continuous regions:
converting the target airport metadata into a grayscale image and computing the vertical and horizontal gradient of each pixel;
marking pixels whose vertical or horizontal gradient exceeds a preset boundary threshold as boundary points;
dividing the grayscale image along the boundary points into a plurality of image blocks;
identifying the airport-runway blocks by feature matching against airport runway imagery and setting them as the primary segmentation region, with all other blocks set as the secondary segmentation region; further image blocks of interest may be assigned to additional segmentation regions according to preset requirements;
scanning the secondary segmentation region with a hollow sliding frame of preset size, the center of the frame traversing the image pixels in turn; for example, with a 1-pixel center the frame is a square of 3 × 3 pixels with a frame thickness of 1 pixel; hollow frames of other sizes, such as 2 × 2, 2 × 3, 3 × 3, or 1 × 3 pixel hollows, may be configured for the corresponding segmentation regions as needed;
checking whether the grayscale differences among the pixels covered by the hollow frame all lie within a preset uniformity range; if so, checking whether the difference between the center pixel and the frame pixels exceeds a preset impurity threshold, and if it does, filling the center pixel with the average of the pixels covered by the frame; after the traversal, airport metadata with reduced impurities is obtained.
When drawing the airport model, runway imagery must be analyzed and segmented precisely, but other regions do not require the same precision: consider spotting gravel scattered through a stretch of grassland, or weeds mixed into a continuous stand of trees. Recognizing and extracting features from such regions at runway-level precision would not only increase recognition difficulty and resource consumption; because a dedicated basic feature extraction sub-network would have to be designed for every such feature, it would also increase the complexity of feature integration.
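The rough segmentation and denoising steps above can be sketched directly in NumPy. This is a minimal illustration rather than the patent's implementation: `boundary_mask` marks pixels whose vertical or horizontal gradient exceeds the boundary threshold, and `hollow_frame_denoise` implements the 3 × 3 hollow sliding frame for the secondary segmentation region (the function names and threshold values are assumptions):

```python
import numpy as np

def boundary_mask(gray, boundary_thresh):
    """Mark pixels whose vertical or horizontal gradient exceeds the
    preset boundary threshold; these become the boundary points."""
    gy, gx = np.gradient(gray.astype(float))
    return (np.abs(gy) > boundary_thresh) | (np.abs(gx) > boundary_thresh)

def hollow_frame_denoise(gray, uniform_range, impurity_thresh):
    """3x3 hollow sliding frame: a 1-pixel-thick ring around a 1-pixel
    center. Where the ring is uniform but the center deviates from it
    by more than the impurity threshold, the center is filled with the
    ring average."""
    out = gray.astype(float).copy()
    h, w = gray.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = gray[i - 1:i + 2, j - 1:j + 2].astype(float)
            ring = np.delete(window.ravel(), 4)  # the 8 frame pixels
            if (ring.max() - ring.min() <= uniform_range
                    and abs(window[1, 1] - ring.mean()) > impurity_thresh):
                out[i, j] = ring.mean()
    return out

# A flat 5x5 region with a single bright impurity pixel in the middle.
img = np.full((5, 5), 10.0)
img[2, 2] = 200.0
clean = hollow_frame_denoise(img, uniform_range=5, impurity_thresh=50)
print(clean[2, 2])  # the impurity is replaced by the ring mean, 10.0
```

Note that the ring test reads from the original image, so the result does not depend on traversal order; only isolated outliers inside otherwise uniform regions are replaced, while genuine boundaries (where the ring itself is non-uniform) are left untouched.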
In this embodiment, the method further includes a step of slicing the target airport metadata: the target airport metadata is cut to a preset size with the QGIS geographic information processing software to obtain target airport slice metadata, and the subsequent steps treat the airport slice metadata as the target airport metadata.
Step S300: based on the target airport metadata, feature extraction is performed through the metadata-based convolutional neural network to obtain the target feature set.
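The fixed-size slicing performed with QGIS above can be reproduced with plain array tiling. The sketch below assumes the metadata is a 2-D raster; the edge-padding strategy is an assumption of this example, not something the text specifies:

```python
import numpy as np

def slice_metadata(raster, tile):
    """Cut a 2-D raster into fixed-size tiles (the role QGIS plays in
    the text), padding the bottom/right edges so every tile is full."""
    h, w = raster.shape
    pad_h = (-h) % tile
    pad_w = (-w) % tile
    padded = np.pad(raster, ((0, pad_h), (0, pad_w)), mode="edge")
    tiles = []
    for i in range(0, padded.shape[0], tile):
        for j in range(0, padded.shape[1], tile):
            # Keep the origin so slices can be matched back to metadata.
            tiles.append(((i, j), padded[i:i + tile, j:j + tile]))
    return tiles

tiles = slice_metadata(np.arange(20 * 30).reshape(20, 30), tile=16)
print(len(tiles))  # 20x30 pads to 32x32, giving 2x2 = 4 tiles
```

Recording each slice's origin is what later allows the extracted slice features to be matched back to positions in the target airport metadata in step S400.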
In this embodiment, the metadata-based convolutional neural network is trained as follows:
Step A100, acquire standard airport metadata carrying a description feature label for each preset geographic description type;
Step A200, copy the standard airport metadata of each geographic description type label and feed the copies simultaneously into a plurality of basic feature extraction sub-networks, wherein each basic feature extraction sub-network extracts only one geographic description type;
Step A300, extract a target feature through each basic feature extraction sub-network;
Step A400, evaluate the target features and compute the loss function;
Step A500, iterate until the preset number of training rounds is reached or the loss function falls below a preset threshold; the set of all basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the sub-networks together constitute the target feature set.
With a traditional image feature extraction method, several different types of geographic information may appear in a single region; the extracted features are classified to give the probability that the slice belongs to a certain geographic description type, and during model visualization and simulation the most likely type is generated at the corresponding position. By instead training a separate feature extraction model for each type, namely the basic feature extraction sub-networks, the present method extracts every type of feature present in a slice and finally integrates them according to the sub-network weights. This prevents features of other types in the same image from interfering, and the final integrated model offers better visual experience and accuracy.
And S400, integrating and obtaining a digital airport model based on the target feature set.
In this embodiment, step S400 specifically comprises: matching the target slice features against the target airport metadata, assigning the corresponding geographic-information material to the position of each data point, and generating a visual digital airport model.
In this embodiment, matching the target features with the target airport metadata specifically comprises:
identifying, according to the target feature set, the feature types contained in the target airport metadata, and determining through a weight learning model the proportion of each geographic description type within each piece of target airport metadata.
The model is generated fully automatically; the only required input is the satellite image together with the depth image.
Although the foregoing embodiments describe the steps in the above sequential order, those skilled in the art will understand that, in order to achieve the effect of the present embodiments, the steps may not be executed in such an order, and may be executed simultaneously (in parallel) or in an inverse order, and these simple variations are within the scope of the present invention.
The digital airport model building system based on the neural network of the second embodiment of the invention comprises:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
the data cutting module is configured to cut according to geographic information based on the target airport metadata to obtain a plurality of target airport slices;
the characteristic extraction module is configured to extract characteristics through a convolutional neural network based on slice metadata based on the target airport slice to obtain target slice characteristics;
the environment restoration module, configured to integrate the target slice features into the digital airport model. This raises the automation level of modeling and avoids the errors that manual work is prone to; when the environment changes, for example due to construction, an update is completed simply by inputting new imagery, which makes real-time digital airport management convenient.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the digital airport model building system based on the neural network provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a third embodiment of the present invention includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the neural-network-based digital airport model construction method described above.
A computer-readable storage medium of a fourth embodiment of the present invention stores computer instructions for being executed by the computer to implement the method for constructing a digital airport model based on neural network described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in Random Access Memory (RAM), Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Reference is now made to FIG. 2, which is a block diagram illustrating a computer system of a server configured to implement embodiments of the present methods, systems, and apparatus. The server shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An Input/Output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 210 as necessary, so that the computer program read out therefrom is mounted into the storage section 208 as necessary.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The above-described functions defined in the method of the present application are performed when the computer program is executed by the Central Processing Unit (CPU) 201. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art can make equivalent changes or substitutions to the related technical features, and the technical solutions after such changes or substitutions still fall within the protection scope of the present invention.
Claims (9)
1. A digital airport model construction method based on a neural network is characterized by comprising the following steps:
step S100, acquiring a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
step S300, performing feature extraction on the target airport metadata through a metadata-based convolutional neural network to obtain a target feature set;
and step S400, integrating the target feature set to obtain a digital airport model.
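As an illustration only, and not the patented implementation, the four steps of claim 1 can be sketched as a pipeline of placeholder functions. All data, the per-pixel metadata layout, and the depth-threshold "network" below are assumptions made for the sketch:

```python
# Illustrative sketch of the four-step pipeline in claim 1 (S100-S400).
# Every function body is a stand-in; the real method uses a trained CNN.

def acquire_inputs():
    """S100: acquire a high-resolution satellite map and a depth camera image."""
    satellite_map = [[0.2, 0.4], [0.6, 0.8]]  # stand-in pixel grid
    depth_image = [[1.0, 1.5], [2.0, 2.5]]    # stand-in per-pixel depths
    return satellite_map, depth_image

def to_metadata(satellite_map, depth_image):
    """S200: match the two sources and fuse them into target airport metadata
    (here: one record per pixel with reflectance and depth)."""
    return [
        {"row": r, "col": c,
         "reflectance": satellite_map[r][c], "depth": depth_image[r][c]}
        for r in range(len(satellite_map))
        for c in range(len(satellite_map[0]))
    ]

def extract_features(metadata):
    """S300: stand-in for the metadata-based CNN; tags each record with a
    coarse geographic description type."""
    return [{**m, "type": "structure" if m["depth"] > 1.5 else "surface"}
            for m in metadata]

def build_model(features):
    """S400: integrate the feature set into a (toy) digital airport model."""
    return {"n_points": len(features),
            "types": sorted({f["type"] for f in features})}

satellite_map, depth_image = acquire_inputs()
metadata = to_metadata(satellite_map, depth_image)
features = extract_features(metadata)
model = build_model(features)
print(model)
```

The point of the sketch is only the data flow: raw imagery becomes fused metadata, metadata becomes typed features, and typed features are integrated into one model object.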
2. The method for constructing the digital airport model based on the neural network as claimed in claim 1, wherein the metadata-based convolutional neural network is trained by:
step A100, acquiring standard airport metadata with description feature labels for each preset geographic description type;
step A200, making a plurality of copies of the standard airport metadata for each geographic description type label and inputting them simultaneously into a plurality of basic feature extraction sub-networks; wherein each basic feature extraction sub-network extracts only one geographic description type;
step A300, extracting a target feature through a basic feature extraction sub-network;
step A400, computing a loss function from the extracted target features;
and step A500, repeating the iteration until a preset number of training iterations is reached or the loss function falls below a preset threshold, wherein the set of all the basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the basic feature extraction sub-networks constitute the target feature set.
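The training loop of steps A100-A500 can be illustrated with a toy per-type "sub-network" (here a single weight fitted by gradient descent). The architecture, learning rate, and data are assumptions for the sketch; the claim does not specify them:

```python
# Toy illustration of A100-A500: one sub-network per geographic type,
# each iterated until a loss threshold or an iteration cap is reached.

def train_subnetwork(samples, targets, lr=0.1, max_iters=500, loss_threshold=1e-4):
    """Fit y ~ w*x by gradient descent on mean squared error; stands in for
    one basic feature extraction sub-network dedicated to a single type."""
    w, loss = 0.0, float("inf")
    for _ in range(max_iters):                      # A500: repeat the iteration
        preds = [w * x for x in samples]            # A300: extract target features
        loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(samples)
        if loss < loss_threshold:                   # A500: loss below threshold
            break
        grad = sum(2 * (p - t) * x
                   for p, t, x in zip(preds, targets, samples)) / len(samples)
        w -= lr * grad                              # A400: loss drives the update
    return w, loss

# A100/A200: labelled "standard airport metadata", one copy per type label.
labelled = {
    "runway":   ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]),   # true weight 2
    "building": ([1.0, 2.0, 3.0], [3.0, 6.0, 9.0]),   # true weight 3
}

# The "metadata-based CNN" of the claim is the set of all trained sub-networks.
subnetworks = {t: train_subnetwork(xs, ys) for t, (xs, ys) in labelled.items()}
for t, (w, loss) in subnetworks.items():
    print(t, round(w, 2))
```

Each sub-network sees every copy of the labelled metadata but learns only its own type, matching the one-type-per-sub-network constraint of step A200.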
3. The method for constructing a digital airport model based on a neural network as claimed in claim 2, wherein the step S400 specifically comprises: matching the target features with the target airport metadata, assigning corresponding geographic information materials to the positions of the corresponding data points, and generating a visual digital airport model.
4. The method according to claim 3, wherein the matching of the target features with the target airport metadata is specifically:
according to the target feature set, confirming the feature types contained in the target airport metadata, and determining, through a weight learning model, the proportion of each geographic description type contained in each item of target airport metadata.
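A hypothetical sketch of claims 3-4: per-type scores are normalised into type proportions, and the dominant type selects the geographic information material for a data point. The score values, the softmax normalisation, and the material table are all assumptions; the claim only names a "weight learning model":

```python
import math

def type_components(scores):
    """Softmax over per-type scores -> proportions of each geographic type."""
    exps = {t: math.exp(s) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def assign_material(components, materials):
    """Assign the material of the dominant type to this data point."""
    dominant = max(components, key=components.get)
    return materials[dominant]

# Assumed sub-network scores for one metadata record.
scores = {"runway": 2.0, "grass": 0.5, "building": 0.1}
components = type_components(scores)
materials = {"runway": "asphalt", "grass": "turf", "building": "concrete"}
print(assign_material(components, materials))
```

Proportions rather than hard labels let one metadata record carry a mix of geographic description types, which is what claim 4's "components" language suggests.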
5. The method for constructing a digital airport model based on neural network as claimed in claim 1, wherein said step S200 specifically comprises:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map by a feature point matching method to obtain a high-resolution space image;
and step S220, generating target airport metadata based on the high-resolution space image.
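Step S210 can be illustrated, under assumptions, with a minimal feature-point matcher: satellite-map keypoints are paired with depth-camera keypoints by nearest descriptor, and each map keypoint inherits the matched point's 3-D position. The 1-D descriptors and positions below are made-up stand-ins for real descriptors such as ORB or SIFT:

```python
# Rough sketch of S210: match feature points between the satellite map and
# the depth camera image, then transfer position information to the map.

def match_points(desc_a, desc_b):
    """Nearest-neighbour match on toy 1-D descriptors; returns (i, j) pairs."""
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)), key=lambda k: abs(da - desc_b[k]))
        matches.append((i, j))
    return matches

# Satellite-map keypoint descriptors, and depth-camera keypoints
# with their (x, y, z) positions.
sat_desc = [0.11, 0.52, 0.93]
cam_desc = [0.10, 0.50, 0.95]
cam_pos = [(10.0, 4.0, 1.2), (12.5, 6.0, 0.8), (15.0, 7.5, 2.1)]

# Each satellite keypoint inherits the position of its matched camera point,
# yielding the "high-resolution spatial image" of S210.
spatial = {i: cam_pos[j] for i, j in match_points(sat_desc, cam_desc)}
print(spatial)
```

In practice this step would use a robust descriptor matcher with outlier rejection; the sketch only shows how matching attaches position information to the map.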
6. The method for constructing a digital airport model based on a neural network as claimed in claim 1, further comprising a step of slicing the target airport metadata: specifically, the target airport metadata is cut to a preset size by QGIS geographic information processing software to obtain target airport slice metadata, and the subsequent steps are performed with the airport slice metadata serving as the target airport metadata.
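Claim 6 names QGIS for the tiling step; as a stand-alone illustration of what "cutting the metadata to a preset size" means, here is a plain grid tiler over a 2-D array. The tile size and the handling of edge tiles are assumptions, not taken from the patent:

```python
# Illustrative slicing step: cut a rows x cols "metadata" grid into
# tile x tile slices (edge tiles may be smaller than the preset size).

def slice_grid(grid, tile):
    rows, cols = len(grid), len(grid[0])
    tiles = []
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            tiles.append([row[c0:c0 + tile] for row in grid[r0:r0 + tile]])
    return tiles

grid = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy metadata
tiles = slice_grid(grid, 2)                               # preset size = 2
print(len(tiles), tiles[0])
```

Slicing keeps each network input at a fixed, manageable size, which is the usual motivation for tiling large geospatial rasters before feature extraction.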
7. A digital airport model building system based on neural network, the system comprising:
an image acquisition module, configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
a feature extraction module, configured to perform feature extraction on the target airport metadata through a metadata-based convolutional neural network to obtain a target feature set;
and the environment restoration module is configured to integrate and obtain the digital airport model based on the target feature set.
8. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the processor to implement the neural-network-based digital airport model construction method of any one of claims 1-6.
9. A computer-readable storage medium storing computer instructions which, when executed by a computer, implement the neural-network-based digital airport model construction method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211015870.7A CN115100266B (en) | 2022-08-24 | 2022-08-24 | Method, system and equipment for constructing digital airport model based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115100266A true CN115100266A (en) | 2022-09-23 |
CN115100266B CN115100266B (en) | 2022-12-06 |
Family
ID=83299853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211015870.7A Active CN115100266B (en) | 2022-08-24 | 2022-08-24 | Method, system and equipment for constructing digital airport model based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115100266B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190213413A1 (en) * | 2015-08-31 | 2019-07-11 | Cape Analytics, Inc. | Systems and methods for analyzing remote sensing imagery |
WO2019179024A1 (en) * | 2018-03-20 | 2019-09-26 | 平安科技(深圳)有限公司 | Method for intelligent monitoring of airport runway, application server and computer storage medium |
CN110991359A (en) * | 2019-12-06 | 2020-04-10 | 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) | Satellite image target detection method based on multi-scale depth convolution neural network |
CN112287904A (en) * | 2020-12-15 | 2021-01-29 | 北京道达天际科技有限公司 | Airport target identification method and device based on satellite images |
US20210201570A1 (en) * | 2019-12-26 | 2021-07-01 | Electronics And Telecommunications Research Institute | Method and apparatus for generating digital surface model using satellite imagery |
WO2022000838A1 (en) * | 2020-07-03 | 2022-01-06 | 南京莱斯信息技术股份有限公司 | Markov random field-based method for labeling remote control tower video target |
Non-Patent Citations (2)
Title |
---|
么嘉棋 et al.: "Cloud detection in remote sensing imagery combining deep learning and conditional random fields", Science of Surveying and Mapping (《测绘科学》) * |
张一民 et al.: "Fast airport target detection based on visual saliency and convolutional neural networks", Spacecraft Recovery & Remote Sensing (《航天返回与遥感》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Turker et al. | Building‐based damage detection due to earthquake using the watershed segmentation of the post‐event aerial images | |
CN109359170B (en) | Method and apparatus for generating information | |
CN114742272A (en) | Soil cadmium risk prediction method based on space-time interaction relation | |
CN112990086B (en) | Remote sensing image building detection method and device and computer readable storage medium | |
CN111626947B (en) | Map vectorization sample enhancement method and system based on generation of countermeasure network | |
Koeva et al. | Towards innovative geospatial tools for fit-for-purpose land rights mapping | |
CN114758337B (en) | Semantic instance reconstruction method, device, equipment and medium | |
CN114187412B (en) | High-precision map generation method and device, electronic equipment and storage medium | |
CN114241326B (en) | Progressive intelligent production method and system for ground feature elements of remote sensing images | |
CN113011350A (en) | Method and device for recognizing and processing regional image and electronic equipment | |
CN111104850B (en) | Remote sensing image building automatic extraction method and system based on residual error network | |
Tarsha Kurdi et al. | Automatic evaluation and improvement of roof segments for modelling missing details using Lidar data | |
CN112287056A (en) | Navigation management visualization method and device, electronic equipment and storage medium | |
CN116563728A (en) | Optical remote sensing image cloud and fog removing method and system based on generation countermeasure network | |
CN109657728B (en) | Sample production method and model training method | |
CN113298042B (en) | Remote sensing image data processing method and device, storage medium and computer equipment | |
CN111832358A (en) | Point cloud semantic analysis method and device | |
CN113158856B (en) | Processing method and device for extracting target area in remote sensing image | |
CN115100266B (en) | Method, system and equipment for constructing digital airport model based on neural network | |
CN112906648A (en) | Method and device for classifying objects in land parcel and electronic equipment | |
CN110309237A (en) | A kind of method and apparatus updating map | |
CN116976115A (en) | Remote sensing satellite application demand simulation method and device oriented to quantitative analysis and judgment | |
CN109376638B (en) | Text-to-ground rate calculation method based on remote sensing image and geographic information system | |
US20230104674A1 (en) | Machine learning techniques for ground classification | |
CN115346081A (en) | Power transmission line point cloud data classification method based on multi-data fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||