CN115100266B - Method, system and equipment for constructing digital airport model based on neural network - Google Patents

Method, system and equipment for constructing digital airport model based on neural network

Info

Publication number
CN115100266B
CN115100266B (application CN202211015870.7A; grant publication CN 115100266 B)
Authority
CN
China
Prior art keywords
airport
metadata
target
image
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211015870.7A
Other languages
Chinese (zh)
Other versions
CN115100266A
Inventor
王杰
郝德月
刘岩
杨树
吴林
汤芯怡
赵思媛
胡婕
阮文新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xiangyi Aviation Technology Co Ltd
Original Assignee
Zhuhai Xiangyi Aviation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xiangyi Aviation Technology Co Ltd
Priority to CN202211015870.7A
Publication of CN115100266A
Application granted
Publication of CN115100266B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30184 Infrastructure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of data processing, and particularly relates to a method, system and equipment for constructing a digital airport model based on a neural network, aiming at solving the problem that the existing approach of building an airport model by manual modeling requires manual adjustment and is technically cumbersome. The invention comprises the following steps: acquiring a high-resolution satellite map and a depth camera image of the airport environment to be modeled; performing information matching on the high-resolution satellite map and the depth camera image and converting them into target airport metadata; based on the target airport metadata, performing feature extraction through a metadata-based convolutional neural network to obtain target features; and integrating the target features to obtain a digital airport model. The method avoids the influence that multiple types of interference within a single image exert on model restoration, and improves the accuracy and visual experience of model restoration.

Description

Digital airport model construction method, system and equipment based on neural network
Technical Field
The invention belongs to the field of data processing, and particularly relates to a method, a system and equipment for constructing a digital airport model based on a neural network.
Background
With the development of transportation technology, flying has become one of the main modes of travel. In order to realize digital management and planning of aircraft at an airport, real airport imagery needs to be turned into a visual, digital model. Existing airport modeling methods require technicians to enter each airport object into the model by hand according to preset dimensions, which places high demands on their modeling skill; every airport must be redrawn from scratch, and whenever the environment changes, the model must be manually revised again. Such methods are therefore technically cumbersome, error-prone and incapable of real-time updating. An airport modeling method is needed that can automatically turn airport imagery into a visual model and keep it updated in real time.
Disclosure of Invention
In order to solve the above problems in the prior art, namely that existing manual airport modeling methods require manual adjustment, are technically cumbersome, error-prone and cannot be updated in real time, the invention provides a digital airport model construction method based on a neural network, which comprises the following steps:
s100, acquiring a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
step S300, based on the target airport metadata, extracting features through a convolutional neural network based on the metadata to obtain a target feature set;
and S400, integrating and obtaining a digital airport model based on the target feature set.
In some preferred embodiments, the convolutional neural network based on slice metadata is trained by:
step A100, acquiring standard airport metadata with description feature labels for each preset geographic description type;
a200, copying a plurality of standard airport metadata of each geographic description type label and simultaneously inputting a plurality of basic feature extraction sub-networks; wherein each basic feature extraction sub-network only extracts one geographic description type;
step A300, extracting a target feature through a basic feature extraction sub-network;
step A400, calculating the target characteristics and calculating a loss function;
step A500, repeating the iteration until the preset number of training rounds is reached or the loss function falls below a preset threshold; the set of all basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the basic feature extraction sub-networks constitute the target feature set.
In some preferred embodiments, the step S400 specifically includes: matching the target slice features with the target airport metadata, assigning the corresponding geographic information material to the position of each data point, and generating a visual digital airport model.
In some preferred embodiments, the target feature is matched with the target airport metadata, specifically:
and according to the target feature set, confirming feature types contained in the target airport metadata, and confirming the components of geographic description types contained in each target airport metadata through a weight learning model.
In some preferred embodiments, the step S200 specifically includes:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map by a feature point matching method to obtain a high-resolution space image;
and step S220, generating target airport metadata based on the high-resolution space image.
In some preferred embodiments, the method further comprises a step of slicing the target airport metadata, specifically, slicing the target airport metadata by a preset size through QGIS geographic information processing software to obtain target airport slice metadata, and performing subsequent steps by using the airport slice metadata as the target airport metadata.
In another aspect of the present invention, a digital airport model building system based on a neural network is provided, the system includes:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
the data cutting module is configured to cut according to geographic information based on the target airport metadata to obtain a plurality of target airport slices;
the characteristic extraction module is configured to extract characteristics through a convolutional neural network based on slice metadata based on the target airport slices to obtain target slice characteristics;
and the environment restoration module is configured to integrate and obtain the digital airport model based on the target slice characteristics.
In a third aspect of the present invention, an electronic device is provided, including:
at least one processor; and
a memory communicatively coupled to at least one of the processors; wherein
the memory stores instructions executable by the processor for execution by the processor to implement the neural network-based digital airport model building method described above.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, and the computer-readable storage medium stores computer instructions for being executed by the computer to implement the above-mentioned method for constructing a digital airport model based on a neural network.
The invention has the beneficial effects that:
(1) According to the method, the corresponding feature extraction network is independently arranged for each geographic information type, so that the accuracy of extracting the features of the complex image is improved, the influence of various types of interference on model restoration in one image is avoided, and the accuracy and visual experience of model restoration are improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a method for constructing a digital airport model based on a neural network according to an embodiment of the present invention;
FIG. 2 is a block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a digital airport model construction method based on a neural network, which improves the accuracy of complex image feature extraction by independently setting a corresponding feature extraction network for each geographic information type, avoids the influence of various types of interference in one image on model restoration, and improves the accuracy and visual experience of model restoration.
The invention discloses a digital airport model construction method based on a neural network, which comprises the following steps:
s100, acquiring a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
step S300, based on the target airport metadata, extracting features through a convolutional neural network based on the metadata to obtain a target feature set;
and S400, integrating to obtain a digital airport model based on the target feature set.
In order to more clearly explain the method for constructing a digital airport model based on a neural network according to the present invention, the following describes in detail the steps in the embodiment of the present invention with reference to fig. 1.
The method for constructing the digital airport model based on the neural network in the first embodiment of the invention comprises the following steps S100-S400, and the steps are described in detail as follows:
and S100, acquiring a high-resolution satellite map and a depth camera image of the airport environment to be modeled.
And S200, performing information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata.
In this embodiment, the step S200 specifically includes:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map by a feature point matching method to obtain a high-resolution space image;
and step S220, generating target airport metadata based on the high-resolution space image.
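The patent does not disclose a concrete feature point matching algorithm for step S210. As a minimal sketch of the idea, the following assumes feature descriptors have already been detected in both images (in practice a library such as OpenCV would supply them), matches them by nearest descriptor distance, and estimates a single translation that transfers position information onto the satellite map. The names `match_features` and `estimate_offset` are hypothetical, and a real implementation would estimate a full homography rather than a pure translation.

```python
import numpy as np

def match_features(desc_sat, desc_depth, max_dist=0.5):
    """Match each satellite-map descriptor to its nearest depth-image
    descriptor by Euclidean distance; keep matches closer than max_dist."""
    matches = []
    for i, d in enumerate(desc_sat):
        dists = np.linalg.norm(desc_depth - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

def estimate_offset(pts_sat, pts_depth, matches):
    """Estimate the mean 2-D translation mapping matched satellite
    keypoints onto the position-bearing depth-camera keypoints."""
    diffs = [pts_depth[j] - pts_sat[i] for i, j in matches]
    return np.mean(diffs, axis=0)

# Toy data: 3 keypoints whose descriptors match one-to-one and whose
# positions differ by a constant shift of (2, -1).
pts_sat = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 2.0]])
pts_depth = pts_sat + np.array([2.0, -1.0])
desc = np.eye(3)  # identity descriptors match trivially

matches = match_features(desc, desc)
offset = estimate_offset(pts_sat, pts_depth, matches)
print(matches)   # [(0, 0), (1, 1), (2, 2)]
print(offset)    # [ 2. -1.]
```

Applying `offset` to every satellite-map pixel coordinate would attach the depth image's position information, yielding the high-resolution space image of step S210 under this simplified translation-only assumption.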
In this embodiment, the method further comprises the step of roughly dividing the airport metadata into hierarchical continuous regions;
converting the target airport metadata into a gray image, and calculating the longitudinal gradient and the transverse gradient of each pixel point;
setting pixel points with longitudinal gradient or transverse gradient larger than a preset boundary threshold value as boundary points;
dividing the gray scale image into a plurality of image blocks by boundary points;
the image blocks belonging to the airport runway are found by feature-matching the image blocks against reference images of airport runways; these are set as the primary segmentation area, and the other image blocks are set as the secondary segmentation area; other image blocks requiring attention can likewise be assigned to a plurality of segmentation areas according to preset requirements;
for the secondary segmentation area, scanning is performed with a hollow sliding frame of preset size whose center traverses the pixels of the image in turn; for example, if the hollow sliding frame is a square with a 1-pixel hollow center, its outer size is 3 x 3 pixels and its frame thickness is 1 pixel; hollow sliding frames of other sizes, such as 2 x 2, 2 x 3, 3 x 3 or 1 x 3 pixel hollows, can be configured for the corresponding segmentation areas as required;
at each position, it is judged whether the gray-level differences among the pixels covered by the hollow sliding frame all fall within a preset uniformity range; if so, it is further judged whether the gray-level difference between the center pixel and the frame pixels exceeds a preset impurity threshold, and if it does, the center pixel is replaced with the average of the pixels covered by the frame; after the traversal, airport metadata with reduced impurities is obtained.
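The boundary detection and hollow-sliding-frame smoothing described above can be sketched directly in numpy. Function names and threshold values below are illustrative, not from the patent; the frame is the 3 x 3 square with a 1-pixel hollow center given as the example.

```python
import numpy as np

def boundary_mask(gray, thresh):
    """Mark pixels whose vertical or horizontal gradient exceeds thresh."""
    gy = np.abs(np.diff(gray.astype(float), axis=0, append=gray[-1:, :]))
    gx = np.abs(np.diff(gray.astype(float), axis=1, append=gray[:, -1:]))
    return (gy > thresh) | (gx > thresh)

def smooth_impurities(gray, uniform_range=10, impurity_thresh=30):
    """3x3 hollow sliding frame: if the 8 ring pixels are mutually uniform
    but the center deviates from their mean by more than impurity_thresh,
    replace the center with the ring average."""
    out = gray.astype(float).copy()
    h, w = gray.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            ring = np.concatenate([
                gray[r - 1, c - 1:c + 2],     # top row of the frame
                gray[r + 1, c - 1:c + 2],     # bottom row of the frame
                gray[r, [c - 1, c + 1]],      # left and right pixels
            ]).astype(float)
            if ring.max() - ring.min() <= uniform_range:
                if abs(float(gray[r, c]) - ring.mean()) > impurity_thresh:
                    out[r, c] = ring.mean()
    return out

# A uniform 5x5 patch of gray level 100 with one "impurity" pixel.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 200
cleaned = smooth_impurities(img)
print(cleaned[2, 2])   # 100.0: the impurity is filled with the ring mean
```

The same traversal with larger or non-square frames would follow the patent's note that different frame sizes can be configured per segmentation area.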
In the process of drawing the airport model, imagery of the runway needs to be accurately analyzed and segmented, but the same precision is not required for the various other areas, where, for example, gravel may be scattered across a stretch of grassland, or weeds may be mixed into a continuous wooded area. If these areas were still recognized and feature-extracted at runway precision, not only would the recognition difficulty and resource consumption increase, but a dedicated basic feature extraction sub-network would have to be designed for every such feature, increasing the complexity of feature integration.
In this embodiment, the method further includes a step of slicing the target airport metadata: specifically, the target airport metadata is cut to a preset size by the QGIS geographic information processing software to obtain target airport slice metadata, and the subsequent steps are performed using the airport slice metadata as the target airport metadata.
Step S300, based on the target airport metadata, feature extraction is performed through the metadata-based convolutional neural network to obtain a target feature set.
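QGIS is an interactive GIS application, so the slicing step is not reproducible here as a single call; as an illustration of the same cut-to-preset-size operation, the following is a hypothetical numpy tiling sketch (edge padding is an assumption the patent does not specify).

```python
import numpy as np

def slice_metadata(meta, tile):
    """Cut a 2-D metadata array into tile x tile slices, padding the
    right and bottom edges so every slice has the preset size."""
    h, w = meta.shape
    ph = (-h) % tile                      # rows needed to reach a multiple
    pw = (-w) % tile                      # columns needed
    padded = np.pad(meta, ((0, ph), (0, pw)), mode="edge")
    slices = []
    for r in range(0, padded.shape[0], tile):
        for c in range(0, padded.shape[1], tile):
            slices.append(padded[r:r + tile, c:c + tile])
    return slices

meta = np.arange(20).reshape(4, 5)        # a toy 4x5 metadata grid
tiles = slice_metadata(meta, 2)
print(len(tiles))                         # 6 slices of 2x2 after padding to 4x6
```

Each returned slice plays the role of one piece of target airport slice metadata fed to the feature extraction step.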
In this embodiment, the training method of the convolutional neural network based on slice metadata is as follows:
step A100, acquiring standard airport metadata with description feature tags for each preset geographic description type;
a200, copying a plurality of standard airport metadata of each geographic description type label and simultaneously inputting a plurality of basic feature extraction sub-networks; wherein each basic feature extraction sub-network only extracts one geographic description type;
step A300, extracting a target feature through a basic feature extraction sub-network;
step A400, calculating the target characteristics and calculating a loss function;
and step A500, repeating the iteration until the preset number of training rounds is reached or the loss function falls below a preset threshold; the set of all basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the basic feature extraction sub-networks constitute the target feature set.
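The patent leaves the architecture of each basic feature extraction sub-network unspecified. As a toy stand-in, the sketch below trains one independent binary unit per geographic description type (logistic regression in place of a convolutional network), preserving the one-sub-network-per-type training scheme of steps A100 to A500; all names and data are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_subnetwork(X, y, lr=0.5, epochs=500):
    """Train one 'basic feature extraction sub-network' (reduced here to a
    single logistic unit) that responds only to one geographic type."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)   # gradient of binary cross-entropy
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy metadata features: column 0 high => "runway", column 1 high => "grass".
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = {"runway": np.array([1, 1, 0, 0]),
          "grass":  np.array([0, 0, 1, 1])}

# One sub-network per geographic description type; the set of all
# sub-networks plays the role of the metadata-based network of step A500.
subnets = {t: train_subnetwork(X, y) for t, y in labels.items()}
w, b = subnets["runway"]
print(sigmoid(X @ w + b).round())         # [1. 1. 0. 0.]
```

Because every sub-network sees a copy of the same standard airport metadata but is supervised only on its own type's labels, each one learns to respond to exactly one geographic description type, as step A200 requires.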
In traditional image feature extraction, several different types of geographic information may appear in each region; the extracted features are classified to obtain the probability that a slice belongs to a certain geographic description type, and during model visualization the most likely type is generated at the corresponding position. By training a separate feature extraction model for each type, i.e. a basic feature extraction sub-network, the present method can extract the features of every type present in a slice and finally integrate them according to the weights of the sub-networks, thereby avoiding interference from other types of features in the same image, so that the final integrated model has high accuracy and a good visual experience.
And S400, integrating and obtaining a digital airport model based on the target feature set.
In this embodiment, the step S400 specifically includes: matching the target slice features with the target airport metadata, assigning the corresponding geographic information material to the position of each data point, and generating a visual digital airport model.
In this embodiment, matching the target feature with the target airport metadata specifically includes:
and according to the target feature set, confirming feature types contained in the target airport metadata, and confirming the components of geographic description types contained in each target airport metadata through a weight learning model.
The invention can automatically generate the visual digital airport model simply by inputting the satellite image and the depth image together.
Although the foregoing embodiments have described the steps in the foregoing sequence, those skilled in the art will understand that, in order to achieve the effect of the present embodiment, different steps are not necessarily performed in such a sequence, and may be performed simultaneously (in parallel) or in an inverse sequence, and these simple variations are within the scope of the present invention.
The digital airport model building system based on the neural network of the second embodiment of the invention comprises:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
the data cutting module is configured to cut according to geographic information based on the target airport metadata to obtain a plurality of target airport slices;
the characteristic extraction module is configured to extract characteristics through a convolutional neural network based on slice metadata based on the target airport slice to obtain target slice characteristics;
the environment restoration module is configured to integrate the target slice features to obtain the digital airport model. This raises the automation level of modeling and avoids the errors that manual operation is prone to; when the environment changes, for example due to construction, the model can be updated simply by inputting new images, which facilitates real-time digital management of the airport.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the digital airport model building system based on the neural network provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a third embodiment of the present invention includes: at least one processor; and a memory communicatively coupled to at least one of the processors; wherein the memory stores instructions executable by the processor for execution by the processor to implement the neural network-based digitized airport model building method described above.
A computer-readable storage medium of a fourth embodiment of the present invention stores computer instructions for being executed by the computer to implement the method for constructing a digital airport model based on neural network described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules and method steps may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Referring now to FIG. 2, therein is shown a schematic block diagram of a computer system of a server for implementing embodiments of the method, system, and apparatus of the present application. The server shown in fig. 2 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present application.
As shown in fig. 2, the computer system includes a central processing unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 202 or a program loaded from a storage section 208 into a random access memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output section 207 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN (local area network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the Internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 210 as necessary, so that a computer program read therefrom is installed into the storage section 208 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 201. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (8)

1. A digital airport model construction method based on a neural network is characterized by comprising the following steps:
step S100, acquiring a high-resolution satellite map and a depth camera image of the airport environment to be modeled;
step S200, carrying out information matching on the high-resolution satellite map and the depth camera image, and converting the high-resolution satellite map and the depth camera image into target airport metadata;
the method also comprises the step of roughly dividing the airport metadata into hierarchical continuous areas;
converting the target airport metadata into a gray image, and calculating the longitudinal gradient and the transverse gradient of each pixel point;
setting pixel points with longitudinal gradient or transverse gradient larger than a preset boundary threshold value as boundary points;
dividing the gray scale image into a plurality of image blocks by boundary points;
performing feature matching between the image blocks and reference airport runway images, identifying the image blocks containing the airport runway and setting them as primary segmentation regions, with the remaining image blocks set as secondary segmentation regions; for each secondary segmentation region, scanning with a hollow sliding frame of a preset size, the center of the hollow sliding frame traversing the pixels of the image in turn;
determining whether the gray-level differences among the pixels covered by the hollow sliding frame all fall within a preset uniformity range; if so, determining whether the gray-level difference between the pixel at the center of the hollow sliding frame and the pixels on the frame exceeds a preset impurity threshold, and if it does, replacing the center pixel with the average of the pixels covered by the frame; after the traversal is completed, the impurity-reduced target airport metadata is obtained;
step S300, based on the target airport metadata, performing feature extraction through a metadata-based convolutional neural network to obtain a target feature set;
the training method of the convolutional neural network based on the metadata comprises the following steps:
step A100, acquiring standard airport metadata with description feature labels for each preset geographic description type;
step A200, copying the standard airport metadata of each geographic description type label into multiple copies and inputting them simultaneously into a plurality of basic feature extraction sub-networks, wherein each basic feature extraction sub-network extracts only one geographic description type;
step A300, extracting a target feature through a basic feature extraction sub-network;
step A400, computing a loss function on the extracted target features;
step A500, repeating the iteration until a preset number of training iterations is reached or the loss function falls below a preset threshold, wherein the set of all the basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the basic feature extraction sub-networks constitute the target feature set;
and step S400, based on the target feature set, performing integration according to the weights of the basic feature extraction sub-networks to obtain the digital airport model.
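The pre-processing in claim 1 (gradient-based boundary detection followed by the hollow-sliding-frame impurity filter) can be sketched minimally in NumPy. This is an illustration, not the patented implementation: the function name, the 5×5 frame, and all threshold values are assumed placeholders for the "preset" values the claim leaves open.

```python
import numpy as np

def segment_and_denoise(gray, boundary_thresh=40, uniform_range=10,
                        impurity_thresh=30, frame_size=5):
    """Sketch of the claim-1 pre-processing. All numeric values are
    illustrative stand-ins for the patent's preset thresholds."""
    # Longitudinal (row) and transverse (column) gradients of each pixel.
    gy, gx = np.gradient(gray.astype(float))
    # Pixels whose gradient exceeds the boundary threshold become boundary points.
    boundary = (np.abs(gx) > boundary_thresh) | (np.abs(gy) > boundary_thresh)

    out = gray.astype(float)
    r = frame_size // 2
    h, w = gray.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = gray[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            # "Hollow" frame: the window border, excluding the interior.
            frame = np.concatenate(
                [win[0, :], win[-1, :], win[1:-1, 0], win[1:-1, -1]])
            # Frame pixels must be mutually uniform ...
            if frame.max() - frame.min() <= uniform_range:
                # ... and a centre that differs strongly is an "impurity":
                # fill it with the average covered by the frame.
                if abs(win[r, r] - frame.mean()) > impurity_thresh:
                    out[y, x] = frame.mean()
    return boundary, out.astype(gray.dtype)
```

On a uniform image with a single bright impurity pixel, the frame around that pixel is uniform while the centre differs, so the impurity is replaced by the frame average, and the gradient test marks the pixels adjacent to it as boundary points.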
2. The method for constructing a digital airport model based on a neural network according to claim 1, wherein said step S400 specifically comprises: matching the target features with the target airport metadata, assigning the corresponding geographic information materials to the positions of the corresponding data points, and generating a visualized digital airport model.
3. The method according to claim 2, wherein the matching of the target features with the target airport metadata is specifically:
determining, according to the target feature set, the feature types contained in the target airport metadata, and determining, through a weight learning model, the composition of the geographic description types contained in each piece of target airport metadata.
4. The method for building a digital airport model based on neural network according to claim 1, wherein said step S200 specifically comprises:
step S210, based on the high-resolution satellite map and the depth camera image, adding position information to the high-resolution satellite map through a feature point matching method to obtain a high-resolution spatial image;
and step S220, generating the target airport metadata based on the high-resolution spatial image.
5. The method for constructing a digital airport model based on a neural network according to claim 1, further comprising a step of slicing the target airport metadata: specifically, cutting the target airport metadata at a preset size through the QGIS geographic information processing software to obtain target airport slice metadata, and performing the subsequent steps with the airport slice metadata as the target airport metadata.
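Claim 5 performs the slicing inside the QGIS software; as a rough programmatic analogue (not the QGIS API, and with an assumed function name and tile size), fixed-size tiling of a metadata raster could look like:

```python
import numpy as np

def slice_metadata(metadata, tile=256):
    """Hedged sketch of the claim-5 slicing step: cut a metadata raster
    into tiles of a preset size, row by row. Edge tiles may be smaller
    when the raster dimensions are not multiples of the tile size."""
    h, w = metadata.shape[:2]
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles.append(metadata[y:y + tile, x:x + tile])
    return tiles
```

For a 512×300 raster with 256-pixel tiles this yields a 2×2 grid of four tiles, the right-hand column truncated to 44 pixels wide.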
6. A digital airport model building system based on neural networks, the system comprising:
the system comprises an image acquisition module, a data processing module and a data processing module, wherein the image acquisition module is configured to acquire a high-resolution satellite map and a depth camera image of an airport environment to be modeled;
the data conversion module is configured to perform information matching on the high-resolution satellite map and the depth camera image and convert the high-resolution satellite map and the depth camera image into target airport metadata;
the method also comprises the step of roughly dividing the airport metadata into hierarchical continuous areas;
converting the target airport metadata into a gray image, and calculating the longitudinal gradient and the transverse gradient of each pixel point;
setting pixel points with longitudinal gradient or transverse gradient larger than a preset boundary threshold value as boundary points;
dividing the gray-scale image into a plurality of image blocks by the boundary points;
performing feature matching between the image blocks and reference airport runway images, identifying the image blocks containing the airport runway and setting them as primary segmentation regions, with the remaining image blocks set as secondary segmentation regions;
for each secondary segmentation region, scanning with a hollow sliding frame of a preset size, the center of the hollow sliding frame traversing the pixels of the image in turn;
determining whether the gray-level differences among the pixels covered by the hollow sliding frame all fall within a preset uniformity range; if so, determining whether the gray-level difference between the pixel at the center of the hollow sliding frame and the pixels on the frame exceeds a preset impurity threshold, and if it does, replacing the center pixel with the average of the pixels covered by the frame; after the traversal is completed, the impurity-reduced target airport metadata is obtained;
the feature extraction module is configured to, based on the target airport metadata, perform feature extraction through a metadata-based convolutional neural network to obtain a target feature set;
the training method of the convolutional neural network based on the metadata comprises the following steps:
step A100, acquiring standard airport metadata with description feature labels for each preset geographic description type;
step A200, copying the standard airport metadata of each geographic description type label into multiple copies and inputting them simultaneously into a plurality of basic feature extraction sub-networks, wherein each basic feature extraction sub-network extracts only one geographic description type;
step A300, extracting target features through a basic feature extraction sub-network;
step A400, computing a loss function on the extracted target features;
step A500, repeating the iteration until a preset number of training iterations is reached or the loss function falls below a preset threshold, wherein the set of all the basic feature extraction sub-networks constitutes the metadata-based convolutional neural network, and the target features extracted by all the basic feature extraction sub-networks constitute the target feature set;
and the environment restoration module is configured to, based on the target feature set, perform integration according to the weights of the basic feature extraction sub-networks to obtain the digital airport model.
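Steps A100-A500, as restated in this system claim, amount to training one sub-network per geographic description type and pooling their outputs into the target feature set. A toy sketch, substituting a single linear layer for each convolutional sub-network and using invented type names (the patent does not enumerate them), might be:

```python
import numpy as np

rng = np.random.default_rng(0)

class BasicFeatureSubNetwork:
    """Toy stand-in for one basic feature extraction sub-network: a
    single linear layer trained by gradient descent on one geographic
    description type. Real sub-networks would be convolutional."""
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))

    def extract(self, x):                 # step A300: extract target features
        return x @ self.W

    def train(self, x, labels, lr=0.05, max_iter=200, loss_thresh=1e-3):
        for _ in range(max_iter):         # step A500: iterate to convergence
            feats = self.extract(x)
            loss = np.mean((feats - labels) ** 2)   # step A400: loss function
            if loss < loss_thresh:        # or the preset iteration limit
                break
            grad = 2 * x.T @ (feats - labels) / len(x)
            self.W -= lr * grad
        return loss

# Steps A100/A200: one copy of the standard metadata per description
# type, each fed to its own sub-network (type names are illustrative).
types = ["runway", "apron", "terrain"]
x = rng.normal(size=(64, 8))              # stand-in for standard airport metadata
nets = {}
for t in types:
    labels = x @ rng.normal(size=(8, 4))  # synthetic description-feature labels
    net = BasicFeatureSubNetwork(8, 4)
    final_loss = net.train(x, labels)
    nets[t] = net

# The set of all sub-networks plays the role of the metadata-based CNN;
# their outputs together form the target feature set (steps A500/S300).
target_feature_set = {t: n.extract(x) for t, n in nets.items()}
```

Because each label matrix here is exactly linear in the input, gradient descent drives each sub-network's loss below the threshold well within the iteration budget; the environment restoration module would then weight and merge the per-type outputs.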
7. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the processor to implement the neural-network-based digital airport model construction method of any one of claims 1-5.
8. A computer-readable storage medium storing computer instructions which, when executed by a computer, implement the neural-network-based digital airport model construction method of any one of claims 1-5.
CN202211015870.7A 2022-08-24 2022-08-24 Method, system and equipment for constructing digital airport model based on neural network Active CN115100266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211015870.7A CN115100266B (en) 2022-08-24 2022-08-24 Method, system and equipment for constructing digital airport model based on neural network

Publications (2)

Publication Number Publication Date
CN115100266A CN115100266A (en) 2022-09-23
CN115100266B true CN115100266B (en) 2022-12-06

Family

ID=83299853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211015870.7A Active CN115100266B (en) 2022-08-24 2022-08-24 Method, system and equipment for constructing digital airport model based on neural network

Country Status (1)

Country Link
CN (1) CN115100266B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991359A (en) * 2019-12-06 2020-04-10 重庆市地理信息和遥感应用中心(重庆市测绘产品质量检验测试中心) Satellite image target detection method based on multi-scale depth convolution neural network
CN112287904A (en) * 2020-12-15 2021-01-29 北京道达天际科技有限公司 Airport target identification method and device based on satellite images

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
WO2017040691A1 (en) * 2015-08-31 2017-03-09 Cape Analytics, Inc. Systems and methods for analyzing remote sensing imagery
CN108446630B (en) * 2018-03-20 2019-12-31 平安科技(深圳)有限公司 Intelligent monitoring method for airport runway, application server and computer storage medium
KR102610989B1 (en) * 2019-12-26 2023-12-08 한국전자통신연구원 Method and apparatus of generating digital surface model using satellite imagery
CN111814654B (en) * 2020-07-03 2023-01-24 南京莱斯信息技术股份有限公司 Markov random field-based remote tower video target tagging method

Non-Patent Citations (2)

Title
Fast airport target detection based on visual saliency and convolutional neural networks; Zhang Yimin et al.; Spacecraft Recovery & Remote Sensing; 30 June 2021; Vol. 42, No. 3; pp. 117-127 *
Remote sensing image cloud detection combining deep learning and conditional random fields; Yao Jiaqi et al.; Science of Surveying and Mapping; 31 December 2019; Vol. 44, No. 12; pp. 121-127 *

Similar Documents

Publication Publication Date Title
US11454500B2 (en) Map feature extraction system for computer map visualizations
Turker et al. Building‐based damage detection due to earthquake using the watershed segmentation of the post‐event aerial images
CN111626947B (en) Map vectorization sample enhancement method and system based on generation of countermeasure network
CN109359170B (en) Method and apparatus for generating information
Soon et al. CityGML modelling for Singapore 3D national mapping
Koeva et al. Towards innovative geospatial tools for fit-for-purpose land rights mapping
CN114742272A (en) Soil cadmium risk prediction method based on space-time interaction relation
CN107832849B (en) Knowledge base-based power line corridor three-dimensional information extraction method and device
CN111104850B (en) Remote sensing image building automatic extraction method and system based on residual error network
CN111346842A (en) Coal gangue sorting method, device, equipment and storage medium
CN114758337B (en) Semantic instance reconstruction method, device, equipment and medium
Su et al. A new hierarchical moving curve-fitting algorithm for filtering lidar data for automatic DTM generation
CN114241326B (en) Progressive intelligent production method and system for ground feature elements of remote sensing images
CN114187412B (en) High-precision map generation method and device, electronic equipment and storage medium
Khayyal et al. Creation and spatial analysis of 3D city modeling based on GIS data
Tarsha Kurdi et al. Automatic evaluation and improvement of roof segments for modelling missing details using Lidar data
CN113706931A (en) Airspace flow control strategy recommendation method and device, electronic equipment and storage medium
CN113780175B (en) Remote sensing identification method for typhoon and storm landslide in high vegetation coverage area
CN109657728B (en) Sample production method and model training method
CN113298042B (en) Remote sensing image data processing method and device, storage medium and computer equipment
CN113158856B (en) Processing method and device for extracting target area in remote sensing image
CN115100266B (en) Method, system and equipment for constructing digital airport model based on neural network
CN112906648A (en) Method and device for classifying objects in land parcel and electronic equipment
CN116976115A (en) Remote sensing satellite application demand simulation method and device oriented to quantitative analysis and judgment
CN109376638B (en) Text-to-ground rate calculation method based on remote sensing image and geographic information system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant