CN118152599A - Intelligent construction method and system for space image service
- Publication number
- CN118152599A (application CN202410326876.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- spatial
- target
- space
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/51—Information retrieval of still image data; Indexing; Data structures therefor; Storage structures
- G06F16/583—Retrieval characterised by metadata automatically derived from the content
- G06F16/587—Retrieval characterised by metadata using geographical or spatial information, e.g. location
- G06V10/40—Extraction of image or video features
- G06V10/806—Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
Abstract
The embodiments of this specification disclose an intelligent construction method and system for a spatial image service, relating to the technical field of spatial image services. The method comprises the following steps: acquiring input conditions, the input conditions comprising a spatial range, a time, and an overlay (covering) relationship; determining first spatial images from an image library based on the input conditions; determining the level corresponding to each first spatial image according to the spatial range and/or the image features of the first spatial image, the image features comprising at least resolution; dividing the first spatial images by level to obtain the target spatial images corresponding to each level; and combining the target spatial images corresponding to each level and constructing a service address based on the label corresponding to each level, to obtain a service logic data set for each level corresponding to the input conditions. This method greatly improves the efficiency of constructing spatial image service logic data.
Description
Technical Field
The present application relates to the technical field of spatial image services, and in particular to an intelligent construction method and system for a spatial image service.
Background
A spatial image is image data of the Earth's surface or other targets acquired from platforms such as satellites, aircraft, or unmanned aerial vehicles (UAVs). It is an important component of spatial information science and is widely applied in fields such as geographic information systems, remote sensing monitoring, navigation and positioning, and smart cities.
A spatial image service provides users, over a network, with functions such as access, query, display, and analysis of spatial image data. It can effectively address the storage, management, sharing, and utilization of spatial image data and increase its value and benefit. The standards commonly used for spatial image services at home and abroad are WMTS (Web Map Tile Service) and WMS (Web Map Service) of the OGC (Open Geospatial Consortium). Because tiled access improves image data retrieval efficiency under high-concurrency demands, the WMTS standard has become the mainstream.
However, constructing spatial image WMTS service data requires a great deal of labor and time and is therefore inefficient. As a result, much of the spatial image data produced by the national high-resolution Earth observation system cannot be put to timely and full use, and work with strict timeliness requirements for spatial images, such as emergency response, inspection of large-scale facilities and equipment, and natural environment monitoring, may lack effective data support.
Therefore, how to provide an intelligent construction method and system for a spatial image service, so that spatial image service data can be automatically screened from an image library according to user requirements and corresponding service addresses generated, thereby improving the personalization and intelligence of spatial image services, is a technical problem to be solved in the prior art.
Disclosure of Invention
One aspect of the embodiments of the present disclosure provides an intelligent construction method for a spatial image service, including:
acquiring input conditions, wherein the input conditions include a spatial range, a time, and an overlay (covering) relationship;
determining first spatial images from an image library based on the input conditions;
determining a level corresponding to each first spatial image according to the spatial range and/or image features of the first spatial image, wherein the image features include at least resolution;
dividing the first spatial images by level to obtain the target spatial images corresponding to each level;
and combining the target spatial images corresponding to each level and constructing a service address based on the label corresponding to each level, to obtain a service logic data set for each level corresponding to the input conditions.
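The five steps above can be sketched end-to-end. The following Python sketch is illustrative only: the function names, the toy image library, the level-assignment rule, and the service-address template are all assumptions, not the patent's implementation.

```python
from collections import defaultdict

# Toy image library: each record carries a bounding box (min_x, min_y, max_x, max_y),
# a capture date, and a ground resolution in metres. All values are invented.
IMAGE_LIBRARY = [
    {"id": "img-a", "bbox": (115.4, 39.4, 117.5, 41.1), "time": "2022-08-03", "resolution": 2.0},
    {"id": "img-b", "bbox": (116.0, 39.8, 116.8, 40.2), "time": "2022-08-15", "resolution": 0.5},
    {"id": "img-c", "bbox": (100.0, 30.0, 101.0, 31.0), "time": "2022-08-10", "resolution": 2.0},
]

def overlaps(b1, b2):
    """True if two axis-aligned bounding boxes intersect."""
    return not (b1[2] <= b2[0] or b2[2] <= b1[0] or b1[3] <= b2[1] or b2[3] <= b1[1])

def match_images(conditions):
    """Step 2: select the first spatial images matching the input conditions."""
    lo, hi = conditions["time_range"]
    return [img for img in IMAGE_LIBRARY
            if overlaps(img["bbox"], conditions["bbox"]) and lo <= img["time"] <= hi]

def assign_level(img):
    """Step 3: a stand-in rule - coarser images go to a lower tile level."""
    return 10 if img["resolution"] >= 1.0 else 15

def build_service_set(conditions):
    """Steps 4-5: group images by level and derive one service address per level."""
    by_level = defaultdict(list)
    for img in match_images(conditions):
        by_level[assign_level(img)].append(img["id"])
    return {level: {"images": ids,
                    "service_address": f"/wmts/{level}_{conditions['time_range'][0][:7]}"}
            for level, ids in by_level.items()}

conditions = {"bbox": (115.7, 39.4, 117.4, 41.0),   # roughly Beijing, for illustration
              "time_range": ("2022-08-01", "2022-08-31"),
              "overlay": "time_priority"}
service_set = build_service_set(conditions)
```

Here `img-c` falls outside the requested spatial range and is filtered out, while the two matching images are split across two levels, each with its own service address.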
In some embodiments, the spatial range is determined based on user input, wherein the user input comprises text input, selection input based on a user interface, or click input.
In some embodiments, each spatial image in the image library is configured with a corresponding metadata table, and the metadata for each spatial image in the metadata table includes at least a coordinate system, a coverage area, and a time, where the coverage area is determined based on an image pyramid.
The determining of the first spatial images from the image library based on the input conditions includes: querying the metadata table for spatial images matching the input conditions, and taking the matched spatial images as the first spatial images.
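A metadata-table query of this kind can be sketched with an in-memory SQLite table. The schema, field names, and sample rows below are illustrative assumptions; the patent only requires that the metadata record a coordinate system, coverage area, and time.

```python
import sqlite3

# A minimal metadata table; schema and values are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metadata (
    image_id TEXT, crs TEXT,
    min_x REAL, min_y REAL, max_x REAL, max_y REAL,
    capture_date TEXT)""")
conn.executemany("INSERT INTO metadata VALUES (?,?,?,?,?,?,?)", [
    ("img-1", "EPSG:4326", 115.4, 39.4, 117.5, 41.1, "2022-08-03"),
    ("img-2", "EPSG:4326", 100.0, 30.0, 101.0, 31.0, "2022-08-10"),
    ("img-3", "EPSG:4326", 116.0, 39.8, 116.8, 40.2, "2021-05-01"),
])

def query_first_images(bbox, start, end):
    """Return ids of images whose coverage intersects bbox and whose
    capture date lies within [start, end]."""
    min_x, min_y, max_x, max_y = bbox
    rows = conn.execute(
        """SELECT image_id FROM metadata
           WHERE capture_date BETWEEN ? AND ?
             AND NOT (max_x <= ? OR min_x >= ? OR max_y <= ? OR min_y >= ?)""",
        (start, end, min_x, max_x, min_y, max_y)).fetchall()
    return [r[0] for r in rows]

first_images = query_first_images((115.7, 39.4, 117.4, 41.0), "2022-08-01", "2022-08-31")
```

Only `img-1` both intersects the requested range and falls inside the time window, so it alone becomes a first spatial image.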
In some embodiments, the level corresponding to a first spatial image is positively correlated with the coverage area of the first spatial image and negatively correlated with its resolution.
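One way to realize the stated correlations is a logarithmic score. The weights, units (square kilometres for coverage, metres for resolution), and base level below are assumptions for illustration; the patent only fixes the direction of each correlation.

```python
import math

def assign_level(coverage_km2, resolution_m, base=10, k_cov=1.0, k_res=2.0):
    """Toy level rule: larger coverage raises the level, a larger numeric
    resolution (coarser ground sampling) lowers it, matching the stated
    positive/negative correlations. Weights and units are assumptions."""
    level = (base
             + k_cov * math.log2(max(coverage_km2, 1.0))
             - k_res * math.log2(max(resolution_m, 0.25)))
    return round(level)
```

For example, quadrupling the coverage at fixed resolution raises the assigned level, while coarsening the resolution at fixed coverage lowers it.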
In some embodiments, the combining of the target spatial images corresponding to each level and the constructing of the service address based on the label corresponding to each level includes, for each target level:
stitching the target spatial images according to their image features and/or spatial relationships, and resampling the stitched image to obtain a resampled image;
slicing the resampled image, and storing the resulting slice images in a target file directory;
taking the level number and time corresponding to the target level as the label of the target file directory;
and constructing the service address corresponding to the target level from the storage path of the target file directory and the label.
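The labelling and address-construction steps can be sketched as pure string assembly. The label pattern, host, and WMTS-style URL template below are assumptions; the patent only requires that the label combine the level number and time and that the address derive from the storage path plus the label.

```python
from pathlib import PurePosixPath

def build_service_address(storage_root, level, time_tag, host="https://tiles.example.com"):
    """Label the target file directory with the level number and time,
    then derive a tile service address from the storage path and label.
    Host and URL template are illustrative assumptions."""
    label = f"L{level:02d}_{time_tag}"
    directory = PurePosixPath(storage_root) / label
    # {TileRow}/{TileCol} are WMTS-style placeholders filled in at request time.
    address = f"{host}{directory}/{{TileRow}}/{{TileCol}}.png"
    return str(directory), address

directory, address = build_service_address("/data/wmts/beijing", 12, "2022-08")
```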
In some embodiments, the stitching of the target spatial images according to their image features and/or spatial relationships includes: stitching the target spatial images according to their corresponding coordinate systems, and/or stitching the target spatial images based on an image fusion algorithm.
In some embodiments, the image fusion algorithm comprises: a fusion algorithm based on band selection, a fusion algorithm based on wavelet transformation, a fusion algorithm based on principal component analysis or a fusion algorithm based on deep learning.
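Of the listed options, PCA-based fusion is the easiest to sketch: each pixel's values across two co-registered bands are projected onto the first principal component of the two-band distribution. This is a minimal stand-alone sketch of that one option, with population covariance and a closed-form 2×2 eigensolution; real fusion pipelines add histogram matching and inverse transforms.

```python
import math

def pca_fuse(band_a, band_b):
    """Fuse two co-registered bands (flat lists of equal length) by projecting
    each pixel pair onto the first principal component of the 2-band data."""
    n = len(band_a)
    mean_a = sum(band_a) / n
    mean_b = sum(band_b) / n
    # Entries of the 2x2 covariance matrix (population covariance)
    caa = sum((x - mean_a) ** 2 for x in band_a) / n
    cbb = sum((y - mean_b) ** 2 for y in band_b) / n
    cab = sum((x - mean_a) * (y - mean_b) for x, y in zip(band_a, band_b)) / n
    # Largest eigenvalue of [[caa, cab], [cab, cbb]] in closed form
    lam = 0.5 * ((caa + cbb) + math.sqrt((caa - cbb) ** 2 + 4 * cab ** 2))
    # Corresponding unit eigenvector (handle the axis-aligned case cab == 0)
    if cab == 0:
        ev = (1.0, 0.0) if caa >= cbb else (0.0, 1.0)
    else:
        norm = math.hypot(cab, lam - caa)
        ev = (cab / norm, (lam - caa) / norm)
    # Project each centred pixel pair onto the first principal component
    return [(x - mean_a) * ev[0] + (y - mean_b) * ev[1]
            for x, y in zip(band_a, band_b)]

fused = pca_fuse([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0])
```

With two identical bands, as here, the first principal component lies along the diagonal, so the fused values preserve the pixel ordering exactly.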
In some embodiments, the method further includes:
detecting whether the stitched image is complete;
and when the stitched image is detected to be incomplete, generating a prompt, or re-searching the first spatial images for the target spatial images corresponding to the target level according to the spatial range and/or image features corresponding to that level.
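A simple way to detect an incomplete stitch is to check that every cell of the target grid is covered by at least one tile. Representing coverage as sets of filled (row, column) cells is a simplifying assumption for illustration.

```python
def check_mosaic_complete(tiles, grid_rows, grid_cols):
    """Return (complete, missing_cells): complete is False when any cell of the
    target grid is covered by no tile, and missing_cells lists the gaps."""
    covered = set()
    for tile in tiles:
        covered.update(tile["cells"])
    expected = {(r, c) for r in range(grid_rows) for c in range(grid_cols)}
    missing = sorted(expected - covered)
    return not missing, missing

complete, missing = check_mosaic_complete(
    [{"id": "t1", "cells": {(0, 0), (0, 1)}}, {"id": "t2", "cells": {(1, 0)}}],
    grid_rows=2, grid_cols=2)
```

When `missing` is non-empty, the system could generate a prompt listing the gaps, or re-query the first spatial images for just the affected extent.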
In some embodiments, the slice images are 256×256 pixels in size, and each slice image's file name includes its corresponding row and column numbers.
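The slicing step reduces to a ceiling division over the image dimensions. The `level_row_col.png` naming pattern below is an assumption; the embodiment only requires that the row and column numbers appear in the file name.

```python
import math

def tile_filenames(width, height, level, tile=256):
    """Slice an image of width x height pixels into 256x256 tiles (edge tiles
    may be partial) and name each tile with its level, row, and column."""
    rows = math.ceil(height / tile)
    cols = math.ceil(width / tile)
    return [f"{level}_{r}_{c}.png" for r in range(rows) for c in range(cols)]

names = tile_filenames(600, 520, level=12)
```

A 600×520 image needs a 3×3 grid of 256-pixel tiles, so nine file names are produced.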
Another aspect of the embodiments of the present disclosure further provides an intelligent construction system for a spatial image service, including:
the condition acquisition module, used to acquire input conditions, wherein the input conditions include a spatial range, a time, and an overlay (covering) relationship;
the first spatial image determining module, used to determine the first spatial images from an image library based on the input conditions;
the hierarchy determining module, used to determine the level corresponding to each first spatial image according to the spatial range and/or image features of the first spatial images, wherein the image features include at least resolution;
the target spatial image determining module, used to divide the first spatial images by level to obtain the target spatial images corresponding to each level;
and the service address construction module, used to combine the target spatial images corresponding to each level and construct a service address based on the label corresponding to each level, to obtain a service logic data set for each level corresponding to the input conditions.
Additional features will be set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the following description and drawings, or may be learned by production or operation of the examples. The features of the present specification may be realized and attained by practicing or using the various aspects of the methods, instrumentalities, and combinations set forth in the detailed examples below.
Drawings
The present specification will be further described by way of exemplary embodiments, which are described in detail in the accompanying drawings. These embodiments are not limiting; in the drawings, like numerals denote like structures, wherein:
FIG. 1 is a schematic diagram of an exemplary application scenario of an intelligent building system for a spatial image service according to some embodiments of the present description;
FIG. 2 is an exemplary block diagram of an intelligent building system for a spatial image service according to some embodiments of the present description;
FIG. 3 is an exemplary flow chart of a method for intelligent construction of a spatial image service according to some embodiments of the present description;
fig. 4 is a flowchart of exemplary sub-steps of a method for intelligent construction of a spatial image service according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is apparent that the drawings in the following description are only some examples or embodiments of the present specification, and it is possible for those of ordinary skill in the art to apply the present specification to other similar situations according to the drawings without inventive effort. Unless otherwise apparent from the context of the language or otherwise specified, like reference numerals in the figures refer to like structures or operations.
It should be appreciated that as used in this specification, a "system," "apparatus," "unit" and/or "module" is one method for distinguishing between different components, elements, parts, portions or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," "one," and/or "the" do not denote the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
The following describes in detail the method and system for intelligently constructing the spatial image service provided in the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a schematic view of an exemplary application scenario of an intelligent construction system for a spatial image service according to some embodiments of the present disclosure.
Referring to fig. 1, in some embodiments, an application scenario 100 of an intelligent building system of a spatial image service may include a terminal device 110, a storage device 120, a processing device 130, and a network 140. The various components in the application scenario 100 may be connected in a variety of ways. For example, terminal device 110 may be coupled to storage device 120 and/or processing device 130 via network 140, or may be coupled directly to storage device 120 and/or processing device 130. As another example, the storage device 120 may be directly connected to the processing device 130 or connected via a network 140.
The terminal device 110 may generate, in response to an operation by a user, input conditions for screening the first spatial images; the input conditions may include a spatial range, a time, and an overlay (covering) relationship. The spatial range refers to the coverage area of the spatial images that the user wants to acquire, and may be a regular shape such as a rectangle, polygon, circle, or ellipse, or an irregular shape. In some embodiments, the spatial range may be determined based on user input, which may include text input, selection input through a user interface, or click input. For example, a user who wants to acquire spatial images of Beijing may input the text "Beijing", and the administrative boundary coordinates corresponding to Beijing are then taken as the spatial range input by the user. As another example, in some embodiments, a number of place names or areas may be provided on the user interface for the user to choose from; the user selects a target place name or target area, and the corresponding boundary coordinates are taken as the spatial range input by the user. As a further example, in some embodiments, the user may draw the extent of the desired spatial images on the user interface by clicking or similar operations, and the spatial range is then calculated from the drawn extent and the image displayed on the user interface. In some embodiments, the time in the aforementioned input conditions may refer to the capture time of the spatial images that the user wants to acquire, which may be a specific date or a time period. For example, a user who wants to acquire spatial images from August 2022 may input the period 2022-08-01 to 2022-08-31, or simply August 2022, as the time.
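Normalizing the time input condition to a concrete date range can be sketched as follows. The two accepted input formats (an explicit `start/end` range and a `YYYY-MM` month) are illustrative assumptions.

```python
import calendar
from datetime import date

def parse_time_condition(value):
    """Normalise a time input condition to a (start, end) date pair.
    Accepts an explicit range 'YYYY-MM-DD/YYYY-MM-DD' or a month 'YYYY-MM';
    these formats are assumptions for illustration."""
    if "/" in value:
        start_s, end_s = value.split("/")
        return date.fromisoformat(start_s), date.fromisoformat(end_s)
    year, month = map(int, value.split("-"))
    # monthrange returns (weekday of day 1, number of days in the month)
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, 1), date(year, month, last_day)

start, end = parse_time_condition("2022-08")
```

So the month "2022-08" expands to the same window as the explicit range 2022-08-01 to 2022-08-31.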
The overlay (covering) relationship specifies how spatial images with overlapping portions should be stacked. For example, when the user wants to acquire multiple spatial images captured at different times, the user may specify time priority or resolution priority: time priority means that an image captured later covers an image captured earlier, while resolution priority means that a higher-resolution image covers a lower-resolution image.
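A minimal sketch of resolving the overlay (capping) relationship for a set of overlapping images, assuming simple record fields; the two rule names mirror the time-priority and resolution-priority options just described.

```python
def resolve_overlay(candidates, rule):
    """Pick which overlapping image sits on top: 'time_priority' keeps the most
    recently captured image, 'resolution_priority' keeps the image with the
    finest ground resolution (smallest metre value). Fields are assumptions."""
    if rule == "time_priority":
        return max(candidates, key=lambda img: img["time"])
    if rule == "resolution_priority":
        return min(candidates, key=lambda img: img["resolution"])
    raise ValueError(f"unknown overlay rule: {rule}")

imgs = [{"id": "old-fine", "time": "2022-03-01", "resolution": 0.5},
        {"id": "new-coarse", "time": "2022-08-10", "resolution": 2.0}]
top_by_time = resolve_overlay(imgs, "time_priority")
top_by_res = resolve_overlay(imgs, "resolution_priority")
```

The two rules deliberately disagree on this pair: time priority favours the newer but coarser image, resolution priority the older but finer one.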
In some embodiments, terminal device 110 may include a mobile device 111, a tablet computer 112, a laptop computer 113, or the like, or any combination thereof. For example, mobile device 111 may comprise a mobile phone, a Personal Digital Assistant (PDA), a dedicated mobile terminal, or the like, or any combination thereof. In some embodiments, terminal device 110 may include input devices (e.g., keyboard, touch screen, microphone), output devices (e.g., display, speaker), etc.
Terminal device 110 may generate, receive, transmit, and/or display data. Wherein the generated data may include input conditions generated in response to user operations, and the received data may include data stored by the storage device 120, data processed by the processing device 130, and the like. The transmitted data may include input data (e.g., the aforementioned input conditions) of the user, instructions, and the like. For example, the terminal device 110 may transmit an operation instruction input by a user to the processing device 130 through the network 140, so as to control the processing device 130 to perform corresponding data processing.
In some embodiments, the terminal device 110 may send the input conditions it generates in response to user operations to the storage device 120, the processing device 130, and so on over the network 140. In some embodiments, the input conditions may be processed by the processing device 130. For example, the processing device 130 may determine the first spatial images from the image library stored in the storage device 120 based on the input conditions, then determine the level corresponding to each first spatial image according to the aforementioned spatial range and/or the image features of the first spatial images, divide the first spatial images by level to obtain the target spatial images corresponding to each level, and finally combine the target spatial images corresponding to each level and construct a service address based on the label corresponding to each level, to obtain the service logic data set for each level corresponding to the input conditions. The image library contains a large amount of spatial image data, and each spatial image is configured with a corresponding metadata table recording related information such as its coordinate system, coverage area, and time. A first spatial image is a spatial image queried from the image library that matches the input conditions, for example, a spatial image of a city within a certain time period, queried according to the city name, the time period, and the overlay-relationship option input by the user. A target spatial image is a spatial image obtained by dividing the first spatial images by level; for example, dividing the first spatial images into levels 0-18 yields the target spatial images corresponding to levels 0-18.
In some embodiments, the aforementioned first spatial images and/or target spatial images may be sent to the storage device 120 for storage, or sent to the terminal device 110 for feedback to the user.
Storage device 120 may store data, instructions, and/or any other information. In some embodiments, storage device 120 may store data obtained from terminal device 110 and/or processing device 130. For example, the storage device 120 may store input conditions generated by the terminal device 110 in response to user operations; for another example, the storage device 120 may store the first spatial images, the target spatial images, and the like processed by the processing device 130. In some embodiments, the storage device 120 may store data and/or instructions that the processing device 130 executes or uses to implement the exemplary methods described in this specification. In some embodiments, the storage device 120 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. Exemplary mass storage may include magnetic disks, optical disks, solid-state drives, and the like. In some embodiments, storage device 120 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the storage device 120 may be connected to the network 140 to communicate with at least one other component (e.g., the terminal device 110, the processing device 130) in the application scenario 100. At least one component in the application scenario 100 may access data, instructions, or other information stored in the storage device 120 through the network 140. In some embodiments, the storage device 120 may be directly connected or in communication with one or more components in the application scenario 100. In some embodiments, storage device 120 may be part of terminal device 110 and/or processing device 130.
The processing device 130 may process data and/or information obtained from the terminal device 110, the storage device 120, and/or other components of the application scenario 100. In some embodiments, the processing device 130 may obtain the input conditions from either or both of the terminal device 110 and the storage device 120, process the input conditions to determine the first spatial images, then determine the level corresponding to each first spatial image according to the spatial range in the input conditions and/or the image features of the first spatial images, divide the first spatial images by level to obtain the target spatial images corresponding to each level, and finally combine the target spatial images corresponding to each level and construct a service address based on the label corresponding to each level, obtaining the service logic data set for each level corresponding to the input conditions. In some embodiments, the processing device 130 may obtain pre-stored computer instructions from the storage device 120 and execute those instructions to implement the intelligent construction method for the spatial image service described herein. In some embodiments, processing device 130 may be part of terminal device 110. In some embodiments, the service logic data set processed by the processing device 130 may be stored in the storage device 120 to provide functional services such as access, query, display, and analysis of the spatial image data.
In some embodiments, the processing device 130 may be a single server or a group of servers. The server farm may be centralized or distributed. In some embodiments, the processing device 130 may be local or remote. For example, processing device 130 may access information and/or data from terminal device 110 and/or storage device 120 via network 140. As another example, processing device 130 may be directly connected to terminal device 110 and/or storage device 120 to access information and/or data. In some embodiments, the processing device 130 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
The network 140 may facilitate the exchange of information and/or data. The network 140 may include any suitable network capable of facilitating the exchange of information and/or data in the application scenario 100. In some embodiments, at least one component of the application scenario 100 (e.g., the terminal device 110, the storage device 120, the processing device 130) may exchange information and/or data with at least one other component in the application scenario 100 via the network 140. For example, the processing device 130 may obtain the input conditions entered by the user from the terminal device 110 and/or the storage device 120 via the network 140. For another example, the user may view on the terminal device 110, via the network 140, the first spatial images determined based on the input conditions and the target spatial images of each level obtained by dividing them.
In some embodiments, network 140 may be any form of wired or wireless network, or any combination thereof. By way of example only, the network 140 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a ZigBee network, a Near Field Communication (NFC) network, and the like, or any combination thereof. In some embodiments, the network 140 may include at least one network access point through which at least one component of the application scenario 100 may connect to the network 140 to exchange data and/or information.
It should be noted that the above description about the application scenario 100 is only for illustration and description, and does not limit the application scope of the present specification. Various modifications and changes to the application scenario 100 may be made by those skilled in the art under the guidance of the present specification. However, such modifications and variations are still within the scope of the present description. For example, terminal device 110 may include more or fewer functional components.
Fig. 2 is a block diagram of an intelligent construction system for a space image service according to some embodiments of the present disclosure. In some embodiments, the intelligent construction system 200 of the spatial image service shown in fig. 2 may be applied to the application scenario 100 shown in fig. 1 in a software and/or hardware manner, for example, may be configured in a software and/or hardware manner to the processing device 130 and/or the terminal device 110, so as to process the input condition generated by the terminal device 110 in response to the user operation, to obtain a service logic data set of each level corresponding to the input condition.
Referring to fig. 2, in some embodiments, the intelligent construction system 200 for a spatial image service may include a condition acquisition module 210, a first spatial image determination module 220, a hierarchy determination module 230, a target spatial image determination module 240, and a service address construction module 250. Wherein:
The condition acquisition module 210 may be used to acquire input conditions including a spatial range, a time, and a capping relationship.
The first spatial image determination module 220 may be configured to determine first spatial images from an image library based on the input conditions.
The hierarchy determination module 230 may be configured to determine the level corresponding to each first spatial image according to the spatial range and/or the image features of the first spatial images, where the image features include at least resolution.
The target spatial image determination module 240 may be configured to divide the first spatial images according to level, so as to obtain the target spatial image corresponding to each level.
The service address construction module 250 may be configured to combine the target spatial images corresponding to each level and construct service addresses based on the labels corresponding to the levels, so as to obtain the service logic data set of each level corresponding to the input conditions.
For further details regarding the above-mentioned respective modules, reference may be made to other locations in the present specification (e.g., fig. 3-4 and their associated descriptions), and no further description is provided herein.
It should be appreciated that the intelligent construction system 200 for spatial image services and its modules shown in fig. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of the two. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer-executable instructions and/or processor control code, provided for example on a carrier medium such as a magnetic disk, CD, or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present specification and its modules may be implemented not only with hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also with software executed by various types of processors, or with a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the intelligent construction system 200 for spatial image services is provided for illustrative purposes only and is not intended to limit the scope of the present description. It will be appreciated by those skilled in the art from this disclosure that, without departing from this concept, the modules may be combined arbitrarily, or a subsystem may be constituted in connection with other modules. For example, the condition acquisition module 210, the first spatial image determination module 220, the hierarchy determination module 230, the target spatial image determination module 240, and the service address construction module 250 described in fig. 2 may be different modules in one system, or one module may implement the functions of two or more of these modules. Such variations are within the scope of the present description. In some embodiments, the foregoing modules may be part of the processing device 130 and/or the terminal device 110.
Fig. 3 is an exemplary flow chart of a method for intelligent construction of a spatial image service according to some embodiments of the present description. In some embodiments, method 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (instructions run on a processing device to perform hardware simulation), or the like, or any combination thereof. In some embodiments, one or more operations in the flowchart of the intelligent construction method 300 of the aerial image service shown in fig. 3 may be implemented by the processing device 130 and/or the terminal device 110 shown in fig. 1. For example, method 300 may be stored in storage device 120 in the form of a computer program or instructions and invoked and/or executed by processing device 130 and/or terminal device 110. The execution of method 300 is described below using processing device 130 as an example.
Referring to fig. 3, in some embodiments, the intelligent construction method 300 of a spatial image service may include:
In step 310, input conditions are obtained, including a spatial range, a time, and a capping relationship. In some embodiments, step 310 may be performed by the condition acquisition module 210.
In some embodiments, the input conditions generated by the terminal device 110 in response to user operations may be stored in the storage device 120, and the condition acquisition module 210 may acquire the input conditions from the storage device 120. In the embodiments of the present disclosure, the input conditions may include a spatial range, a time, and a capping relationship. The spatial range refers to the coverage area of the spatial imagery that the user wants to acquire, and may be a regular shape such as a rectangle, polygon, circle, or ellipse, or an irregular shape. In some embodiments, the spatial range may be determined based on user input, which may include text input, selection input based on a user interface, or click input. For example, a user who wants to acquire spatial imagery of Beijing may input the text "Beijing", and the administrative boundary coordinates corresponding to Beijing are then used as the spatial range input by the user. As another example, in some embodiments, a number of place names or areas may be provided on the user interface for selection by the user; the user may select a target place name or target area, and the boundary coordinates corresponding to that target place name or target area are then used as the spatial range input by the user. As yet another example, in some embodiments, the user may draw the range of the spatial imagery to be acquired on the user interface by clicking or similar operations, and the spatial range input by the user is then calculated from the drawn range and the image displayed on the user interface.
In some embodiments, the time in the aforementioned input conditions may refer to the capture time of the spatial imagery that the user wants to acquire, which may be a specific date or a time period. For example, a user who wants to acquire spatial imagery from August 2022 may input 2022-08-01 to 2022-08-31 as the time, or simply August 2022. The capping relationship refers to the display rule applied where the spatial images specified by the user overlap. For example, if the user wants to acquire multiple spatial images captured at different times, time priority or resolution priority may be specified: time priority means that a later image overlays an earlier one, and resolution priority means that a higher-resolution image overlays a lower-resolution one.
In some embodiments, the condition acquisition module 210 may be communicatively coupled to the terminal device 110, and the condition acquisition module 210 may acquire the aforementioned input conditions directly from the terminal device 110.
Step 320, determining first spatial images from an image library based on the input conditions. In some embodiments, step 320 may be performed by the first spatial image determination module 220.
In the embodiments of the present disclosure, the image library refers to a database for storing spatial images. The spatial images in the image library may be obtained in various ways, for example, by satellite remote sensing, aerial remote sensing, or ground remote sensing. Satellite remote sensing observes and photographs the earth's surface or other targets from orbit using remote sensors carried by artificial satellites to acquire spatial image data. Aerial remote sensing observes and photographs the earth's surface or other targets from the air using remote sensors carried by aircraft, unmanned aerial vehicles, balloons, or other carriers. Ground remote sensing observes and photographs the earth's surface or other targets from the ground or near the ground using ground-based equipment or manually operated remote sensors, such as ground photography, ground radar, and ground lidar systems.
It should be noted that a spatial image is a raster file with spatial information, which is unstructured data, so the internal information of the file cannot be operated on directly according to spatial relationships. In the embodiments of the present disclosure, in order to give the image library the capability to operate on spatial image data via SQL (Structured Query Language), a metadata table corresponding to each spatial image in the image library may be configured, so that the unstructured spatial image data can be queried under arbitrary spatial and attribute conditions based on the metadata table, laying the groundwork for the intelligent construction of the subsequent layer service logic data sets.
Specifically, in some embodiments, the metadata corresponding to each spatial image in the metadata table may include at least a coordinate system, a coverage area, and a time. The coordinate system is a mathematical model for representing the position and direction of each pixel in the spatial image, and generally consists of a reference ellipsoid, a reference plane, and a projection method. In some embodiments, the coordinate system may include a geographic coordinate system, which represents spatial locations in terms of longitude and latitude, and a projected coordinate system, which represents spatial locations using planar coordinates. The coverage area refers to the entire geographic area covered by each spatial image, which may be a regular or irregular shape. In some embodiments, the coverage area may be used to determine spatial relationships with other spatial images. The time in the metadata refers to the capture time of the spatial image, which can be used to determine the temporality and dynamics of the spatial image as well as its temporal relationship with other spatial images.
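To make the metadata-table idea concrete, the sketch below builds a minimal table in SQLite and runs the kind of combined spatial-plus-attribute query that the first spatial image selection relies on. The schema, field names, and the bounding-box simplification of the coverage area are illustrative assumptions, not the patent's actual design; a production system would store full vector footprints, e.g. in PostGIS.

```python
import sqlite3

# Hypothetical metadata table: coverage is simplified to a bounding box.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE image_metadata (
    image_id TEXT PRIMARY KEY,
    crs TEXT,            -- coordinate system, e.g. 'EPSG:4326'
    min_lon REAL, min_lat REAL, max_lon REAL, max_lat REAL,  -- coverage bbox
    acquired TEXT,       -- capture time, ISO date
    resolution_m REAL    -- ground resolution in metres per pixel
)""")
rows = [
    ("img_a", "EPSG:4326", 116.0, 39.4, 116.8, 40.2, "2022-08-10", 2.0),
    ("img_b", "EPSG:4326", 115.0, 38.0, 115.9, 38.9, "2022-08-15", 0.5),
    ("img_c", "EPSG:4326", 116.2, 39.6, 117.0, 40.5, "2021-05-01", 2.0),
]
conn.executemany("INSERT INTO image_metadata VALUES (?,?,?,?,?,?,?,?)", rows)

def query_first_images(bbox, start, end):
    """Return ids of images whose coverage intersects bbox within [start, end]."""
    min_lon, min_lat, max_lon, max_lat = bbox
    cur = conn.execute(
        """SELECT image_id FROM image_metadata
           WHERE max_lon >= ? AND min_lon <= ?
             AND max_lat >= ? AND min_lat <= ?
             AND acquired BETWEEN ? AND ?""",
        (min_lon, max_lon, min_lat, max_lat, start, end))
    return [r[0] for r in cur]

# Spatial range roughly covering Beijing, August 2022:
print(query_first_images((115.7, 39.4, 117.4, 41.1), "2022-08-01", "2022-08-31"))
```

Because ISO dates sort lexicographically, the `BETWEEN` clause works on plain text columns; real deployments would add spatial indexes rather than scan the bounding boxes.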
In some embodiments, the coverage vector surface corresponding to each spatial image in the image library may be determined based on an image pyramid. It can be understood that an image pyramid is a set of progressively reduced-resolution copies of an image: a series of image layers at different resolutions can be built by image resampling, each layer can be combined and/or stored separately, and a corresponding spatial index can be established, thereby improving display speed when browsing the image. In some embodiments, an image pyramid may be created using a tool or library, and a corresponding vector surface then generated for each layer. Illustratively, in some embodiments, ArcGIS software may be used: the Batch Build Pyramids tool creates an image pyramid for the raster dataset, and the Raster Domain tool then generates the corresponding vector surface for each layer. As another example, in some embodiments, the GDAL library may be used: the gdaladdo command creates an image pyramid for the raster dataset, and the gdal_polygonize.py script then generates the corresponding vector surface for each layer.
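As a rough illustration of what pyramid building does, independent of ArcGIS or GDAL (whose tools do this far more efficiently on real rasters), the following pure-Python sketch repeatedly halves a tiny raster by averaging 2x2 pixel blocks until a single pixel remains. Block averaging is only one of several resampling choices; the raster values here are synthetic.

```python
def build_pyramid(image, min_size=1):
    """Build a simple image pyramid: repeatedly halve resolution by
    averaging 2x2 pixel blocks (one common resampling choice)."""
    levels = [image]
    while len(image) > min_size and len(image[0]) > min_size:
        h, w = len(image) // 2, len(image[0]) // 2
        image = [
            [(image[2 * r][2 * c] + image[2 * r][2 * c + 1]
              + image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)]
            for r in range(h)]
        levels.append(image)
    return levels

base = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
pyr = build_pyramid(base)
print([(len(l), len(l[0])) for l in pyr])  # sizes: (4, 4), (2, 2), (1, 1)
```

Each layer of such a pyramid is what a footprint/vector-surface tool would then trace into a coverage polygon.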
In the embodiments of the present disclosure, by configuring a metadata table corresponding to each spatial image in the image library and combining it with the data structure of the imagery, the spatial images gain the conditions for SQL operation, enabling fast querying and reading of image information under any spatial region and any image attribute condition.
Further, after the image library is configured as above, the first spatial image determination module 220 may query the metadata tables corresponding to the spatial images in the image library for the spatial images matching the input conditions, based on the spatial range, time, and capping relationship information contained in the input conditions, and take the matching spatial images as the first spatial images for subsequent processing.
Step 330, determining the level corresponding to each first spatial image according to the spatial range and/or the image features of the first spatial images, where the image features include at least resolution. In some embodiments, step 330 may be performed by the hierarchy determination module 230.
In the embodiments of the present disclosure, through step 320, first spatial images matching the input conditions entered by the user, with various spatial ranges (less than or equal to the spatial range in the input conditions) and various resolutions, may be selected from the image library. Further, the hierarchy determination module 230 may determine the level corresponding to each first spatial image according to the spatial range and/or the image features of the first spatial images. The image features of a first spatial image may include at least resolution, which refers to the actual ground distance represented by each pixel in the spatial image; the higher the spatial resolution, the clearer the spatial detail.
Specifically, in some embodiments, the metadata table may further include the resolution corresponding to each spatial image, and the hierarchy determination module 230 may determine the service level corresponding to each first spatial image based on its resolution. In the embodiments of the present disclosure, the service levels corresponding to spatial imagery are the one or more layers obtained by dividing the imagery according to resolution, each level corresponding to a scale. As the level increases, the actual coverage area shown in the computer display window decreases while the resolution increases; in other words, the level is negatively correlated with the actual coverage area displayed and positively correlated with the resolution of the first spatial image used. For example, the images displayed in a computer display window may be divided into levels 0-18: level 0 has the lowest resolution and can cover a large area (e.g., an entire city, province, country, or the world), so lower-resolution first spatial images may be used; level 18 has the highest resolution and covers only a small area, so higher-resolution first spatial images may be used.
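One plausible way to map an image's ground resolution onto a 0-18 service level is sketched below. It uses the common web-map convention of 256-pixel Web Mercator tiles, whose level-0 ground resolution at the equator is about 156543 metres per pixel and halves at each level; this convention is an assumption for illustration, not something the patent prescribes.

```python
# Ground resolution (metres per pixel at the equator) of web-map level 0
# with 256-pixel tiles; each subsequent level halves it.
LEVEL0_RES_M = 156543.03392804097  # 2 * pi * 6378137 / 256

def level_for_resolution(res_m, max_level=18):
    """Pick the lowest level whose ground resolution does not exceed res_m,
    i.e. the coarsest level at which the image is still sharp enough."""
    for level in range(max_level + 1):
        if LEVEL0_RES_M / (2 ** level) <= res_m:
            return level
    return max_level

print(level_for_resolution(150000.0))  # coarse image -> low level (1)
print(level_for_resolution(0.6))       # sub-metre image -> high level (18)
```

A real implementation would also cap the result by the image's spatial extent, since a small high-resolution scene contributes nothing at globe-scale levels.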
In the embodiments of the present disclosure, the hierarchy determination module 230 may use the capping relationship specified by the user to determine the display rule for first spatial images that overlap each other. For example, when only time priority is selected, the later image overlays the earlier image, even if the earlier image has higher resolution; when only resolution priority is selected, the higher-resolution image overlays the lower-resolution image, even if the lower-resolution image is more recent.
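The capping rule above amounts to choosing a stacking order for overlapping images. The sketch below sorts images so that later entries in the returned list are drawn on top; the dictionary field names are illustrative, not from the patent.

```python
from datetime import date

def stacking_order(images, priority):
    """Sort overlapping images into drawing order; later entries are on top."""
    if priority == "time":
        # later acquisition time overlays earlier, even at lower resolution
        return sorted(images, key=lambda im: im["acquired"])
    if priority == "resolution":
        # finer resolution (smaller metres/pixel) overlays coarser
        return sorted(images, key=lambda im: -im["resolution_m"])
    raise ValueError("unknown capping priority: " + priority)

imgs = [
    {"id": "old_fine",   "acquired": date(2021, 5, 1),  "resolution_m": 0.5},
    {"id": "new_coarse", "acquired": date(2022, 8, 15), "resolution_m": 2.0},
]
print([im["id"] for im in stacking_order(imgs, "time")])        # new_coarse ends on top
print([im["id"] for im in stacking_order(imgs, "resolution")])  # old_fine ends on top
```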
Step 340, dividing the first spatial images according to level to obtain the target spatial image corresponding to each level and the display rule applied where target spatial images overlap. In some embodiments, step 340 may be performed by the target spatial image determination module 240.
In the embodiments of the present disclosure, the target spatial image determination module 240 may be configured to group the first spatial images of the same level together, so as to obtain the target spatial images corresponding to each level. In this specification, target spatial images refer to spatial images with the same or similar resolution whose coverage areas intersect with or fall within the coverage of the spatial image service; they can be used to form a spatial image layer for subsequent spatial image processing and services.
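The grouping itself reduces to partitioning the matched first spatial images by their assigned level, for example (field names illustrative):

```python
from collections import defaultdict

def group_by_level(first_images):
    """Divide first spatial images by their assigned service level,
    yielding the target spatial images for each level."""
    by_level = defaultdict(list)
    for image in first_images:
        by_level[image["level"]].append(image["id"])
    return dict(by_level)

first_images = [
    {"id": "a", "level": 12}, {"id": "b", "level": 12}, {"id": "c", "level": 15},
]
print(group_by_level(first_images))  # {12: ['a', 'b'], 15: ['c']}
```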
Step 350, combining the target spatial images corresponding to each level and the display rules applied where the target spatial images overlap, and constructing service addresses based on the labels corresponding to each level, so as to obtain the service logic data set of each level corresponding to the input conditions. In some embodiments, step 350 may be performed by the service address construction module 250.
In the embodiments of the present disclosure, the labels corresponding to the levels are pieces of information identifying the target spatial images, such as a service ID, a level number, and a time. The service address refers to the network address used to access the target spatial images, and may be composed of the storage path of the target spatial images corresponding to each level and the label corresponding to each level. The service logic data set refers to the data set formed from the service addresses of the levels corresponding to the input conditions, and can be used to provide the spatial image service to the user.
Fig. 4 is a flowchart of exemplary sub-steps of a method for intelligent construction of a spatial image service according to some embodiments of the present description. Referring to fig. 4, in some embodiments, step 350 may include the sub-steps of:
Sub-step 351, stitching, for each target level, the target spatial images according to their image features and/or spatial relationships, and resampling the stitched image to obtain a resampled image.
In some embodiments, target spatial images of the same level can be stitched according to their image features and/or spatial relationships. Specifically, in some embodiments, multiple target spatial images of the same level may be stitched based on image fusion algorithms, including fusion algorithms based on band selection, wavelet transform, principal component analysis, or deep learning. These image fusion algorithms can fuse or stitch target spatial images of the same level according to their image features, thereby obtaining a stitched image. For example, for target spatial images with intersecting regions, any one or a combination of the above image fusion algorithms may be used to perform fusion processing, thereby achieving image stitching.
Specifically, the band-selection-based fusion algorithm extracts useful features in the target spatial images by selecting information from different bands and then selects the bands that best reflect the target information for fusion and stitching; for example, the intersecting region can be found from the band distribution features of different target spatial images of the same level, and fusion and stitching can then be achieved by de-duplicating the intersecting region. The wavelet-transform-based fusion algorithm decomposes the target spatial images into different frequency components, selects or combines the frequency components according to certain rules, and then reconstructs the fused (stitched) image using the inverse wavelet transform. The principal-component-analysis-based fusion algorithm transforms the target spatial images into a set of mutually uncorrelated principal components, selects or combines the components according to certain rules, and finally reconstructs the fused (stitched) image using the inverse principal component transform. The deep-learning-based fusion algorithm uses a deep neural network to perform feature extraction and feature fusion on the target spatial images of the same level, and then generates the fused (stitched) image using an upsampling or reconstruction module.
In some embodiments, target spatial images of the same level may be stitched according to the coordinate system corresponding to each image, since the coordinate system reflects the spatial relationship between different target spatial images. For example, stitching may be achieved by aligning pixels with the same coordinates (e.g., the same longitude and latitude) in different target spatial images.
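Coordinate-based alignment can be sketched as follows: each image carries an upper-left origin (in map units) and a pixel size, so aligning same-coordinate pixels reduces to computing integer offsets into one shared mosaic grid. The inputs below are tiny synthetic rasters and the dictionary fields are illustrative assumptions.

```python
def stitch(tiles, pixel_size):
    """Place georeferenced tiles into one mosaic grid by their origins.
    Each tile: {'x0': upper-left easting, 'y0': upper-left northing, 'data': rows}."""
    min_x = min(t["x0"] for t in tiles)
    max_y = max(t["y0"] for t in tiles)
    max_x = max(t["x0"] + len(t["data"][0]) * pixel_size for t in tiles)
    min_y = min(t["y0"] - len(t["data"]) * pixel_size for t in tiles)
    width = round((max_x - min_x) / pixel_size)
    height = round((max_y - min_y) / pixel_size)
    mosaic = [[None] * width for _ in range(height)]
    for t in tiles:
        col0 = round((t["x0"] - min_x) / pixel_size)
        row0 = round((max_y - t["y0"]) / pixel_size)
        for r, row in enumerate(t["data"]):
            for c, v in enumerate(row):
                mosaic[row0 + r][col0 + c] = v  # later tiles overwrite overlaps
    return mosaic

tiles = [
    {"x0": 0.0, "y0": 2.0, "data": [[1, 1], [1, 1]]},  # left tile
    {"x0": 2.0, "y0": 2.0, "data": [[2, 2], [2, 2]]},  # adjacent right tile
]
print(stitch(tiles, pixel_size=1.0))  # 2x4 mosaic: [[1, 1, 2, 2], [1, 1, 2, 2]]
```

The "later tiles overwrite overlaps" line is where the capping rule from step 330 would decide the iteration order.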
In some embodiments, after the target spatial images of the same level are stitched, whether the stitched image is complete may be detected; when the stitched image is detected to be incomplete, a prompt may be generated to remind the relevant staff to supplement the missing data, or the target spatial images corresponding to the target level may be searched for again from the first spatial images according to the spatial range and/or image features corresponding to the target level.
Specifically, in some embodiments, completeness may be determined by checking the integrity of the stitched image or checking for missing pixels. If the image is incomplete, the above steps are repeated and the target spatial images corresponding to the target level are searched for again from the first spatial images according to the spatial range and/or image features corresponding to the target level. For example, in some embodiments, the target spatial images may be searched for again from the first spatial images according to the coordinate range corresponding to the missing region. As another example, in some embodiments, the corresponding target spatial images may be searched for from the first spatial images according to the image features of other target spatial images adjacent to the missing region.
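A minimal completeness check, assuming uncovered pixels are marked with a nodata value (here `None`, as in the stitching sketch; real rasters typically use a designated nodata number):

```python
def find_missing(mosaic, nodata=None):
    """Return (row, col) positions still holding the nodata marker,
    i.e. regions that no target spatial image covered."""
    return [(r, c)
            for r, row in enumerate(mosaic)
            for c, v in enumerate(row)
            if v == nodata]

complete = [[1, 2], [3, 4]]
gappy = [[1, None], [3, 4]]
print(find_missing(complete))  # [] -> stitched image is complete
print(find_missing(gappy))     # [(0, 1)] -> re-query this region
```

The positions returned could be converted back to a coordinate range and used to re-query the first spatial images, as described above.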
Further, after the stitched image is obtained, it may be resampled to obtain a resampled image. Specifically, in the embodiments of the present disclosure, resampling refers to the process of generating the pixel information of one image from the pixel information of another image by some algorithm, thereby changing the spatial resolution, projection, or extent of the image to suit the application requirements of different levels.
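As one concrete example of such an algorithm, nearest-neighbour resampling generates each output pixel from the closest source pixel; it is the simplest of several common choices (bilinear and cubic resampling are others) and is shown here on a synthetic raster:

```python
def resample_nearest(image, new_h, new_w):
    """Nearest-neighbour resampling: each output pixel copies the
    closest source pixel, changing the image's resolution."""
    src_h, src_w = len(image), len(image[0])
    return [[image[r * src_h // new_h][c * src_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

src = [[0, 1], [2, 3]]
print(resample_nearest(src, 4, 4))  # each source pixel expands to a 2x2 block
```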
Sub-step 352, slicing the resampled image, and storing the slice images obtained after slicing in a target file directory.
After the resampled image is obtained through the above steps, it may be sliced. In the embodiments of the present disclosure, slicing divides the large-format resampled image into many small-format slice images to facilitate network transmission, display, and subsequent application. The resampled image may be sliced using a slicing tool, such as GDAL (Geospatial Data Abstraction Library), ArcGIS, QGIS, or MapTiler. In some embodiments, the slice images may be sized as desired, for example 256×256 pixels or 512×512 pixels. The format of the slice images may depend on the service platform, e.g., PNG, JPG, or TIFF. By way of example only, in some embodiments of the present description, the slice images may be 256×256 pixels, and each slice image may include its row and column numbers in its file name, reflecting its position in the resampled image before slicing.
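The row/column naming scheme can be sketched as below; a 2x2 tile size is used so the example stays tiny, where a real service would use 256 or 512, and the keys stand in for file names:

```python
def slice_image(image, tile=2):
    """Cut a resampled image into tile x tile slices keyed by "row_col",
    mirroring the row/column numbers embedded in slice file names."""
    slices = {}
    for row0 in range(0, len(image), tile):
        for col0 in range(0, len(image[0]), tile):
            key = f"{row0 // tile}_{col0 // tile}"
            slices[key] = [r[col0:col0 + tile] for r in image[row0:row0 + tile]]
    return slices

image = [[r * 4 + c for c in range(4)] for r in range(4)]
slices = slice_image(image)
print(sorted(slices))  # ['0_0', '0_1', '1_0', '1_1']
```

Writing each slice to `<target_dir>/<row>_<col>.png` then reproduces the directory layout described above.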
In some embodiments, the slice images obtained after slicing may be stored in a target file directory. The target file directory may be understood as a folder for storing slice images, under which different subfolders may be created according to level and time for easy management and retrieval.
Sub-step 353, using the level number and time corresponding to the target level as the label of the target file directory.
After the slice images obtained by processing the target spatial images corresponding to a target level are stored in the corresponding target file directory, the level number and time corresponding to that level can be used as the label of the directory, adding metadata information to it and facilitating subsequent service address construction and querying. In this specification, the level number may be a number or symbol representing a level, such as 0, 1, or 2. The time may represent the capture or update time of the target spatial images, such as 2023-01-01 or 2023-08-01. In some embodiments, the aforementioned labels may be stored in the target file directory as text files or database entries, or represented by folder names or parts of file names.
Sub-step 354, constructing the service address corresponding to the target level according to the storage path of the target file directory and the label.
In the embodiments of the present disclosure, by constructing the service address corresponding to each level, the slice images in the target file directory of each level can be exposed as an accessible network address for subsequent spatial image processing and services. In this specification, the service address may follow different service protocols and formats, such as WMTS or TMS. In some embodiments, the service address may be formed by concatenating the storage path and label of the target file directory into a string according to some rule or template, or by converting them into numbers or symbols according to some algorithm or encoding.
For example, assuming the storage path of the target file directory is D:/files/data, the label corresponding to the directory is 0_2022-01-01 (representing level 0 and time 2022-01-01), and the slice images are in PNG format, the service address may be expressed as http://localhost:8080/files/data/0_2022-01-01/{x}_{y}.png, where http://localhost:8080/files/data maps to the storage path of the target file directory (by way of example only; it may be set according to actual conditions or user requirements), and {x} and {y} represent the row and column numbers of the slice images. In some embodiments, the service address may also be expressed as http://localhost:8080/files/data/0_2022-01-01/{id}, where {id} represents the unique identifier of a slice image. In some embodiments, the unique identifier may include the row and column numbers of the slice image within its resampled image.
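The string-template flavour of address construction can be sketched as below; the host and the URL template shape are illustrative assumptions in the style of the example above, not a prescribed format:

```python
def build_service_address(host, storage_root, level, time, ext="png"):
    """Join a directory's storage root with its level/time label into a
    TMS-style tile URL template; {x}/{y} stay literal for the client."""
    tag = f"{level}_{time}"  # label of the target file directory
    return f"http://{host}/{storage_root}/{tag}/{{x}}_{{y}}.{ext}"

addr = build_service_address("localhost:8080", "files/data", 0, "2022-01-01")
print(addr)  # http://localhost:8080/files/data/0_2022-01-01/{x}_{y}.png
```

A tile client substitutes concrete row/column numbers for {x} and {y} when fetching each slice.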
Through the above sub-steps 351 to 354, the target spatial images corresponding to the different levels can be processed to obtain the service address corresponding to each level. The implementation for the other levels may refer to sub-steps 351 to 354 and is not repeated in this specification.
As can be seen from the above, through steps 310 to 350, the input conditions entered by the user (e.g., staff of a spatial image service provider) can be processed to generate the service logic data set of each level corresponding to those conditions, greatly improving the efficiency of constructing and publishing service logic data and laying a foundation for the dynamic application of spatial image data. For example, for a provincial spatial image service covering tens of thousands of square kilometers, constructing and publishing the service logic data set originally required more than ten people working for a month; with the intelligent construction method of the spatial image service provided in the embodiments of this specification, it can be completed within a few minutes. The method therefore enables the rapid construction of large-scale, multi-level spatial image service data.
In addition, in the intelligent construction method of the spatial image service provided in the embodiments of this specification, tasks such as hierarchical screening of spatial image data, editing of the stacking order, hierarchical slicing, construction of the service data directory system, and data copying are completed automatically by computer, avoiding the errors and quality problems that arise when large amounts of service data are screened and processed manually, thereby improving the accuracy and reliability of the spatial image service data.
In addition, it should be noted that because the intelligent construction method of the spatial image service provided in the embodiments of this specification greatly improves the construction efficiency of spatial image service logic data, aerospace remote sensing data can also play a role in work with high timeliness requirements, such as emergency response, inspection of large-scale facilities and equipment, and natural environment monitoring.
In summary, the possible benefits of the embodiments of the present disclosure include, but are not limited to, the following. (1) In the intelligent construction method and system for the spatial image service provided in some embodiments of this specification, the input conditions entered by the user are processed by computer and the service logic data set of each level corresponding to those conditions is generated automatically, which greatly improves the efficiency of constructing and publishing service logic data, lays a foundation for the dynamic application of spatial image data, and allows aerospace remote sensing data to play a role in work with high timeliness requirements such as emergency response, inspection of large-scale facilities and equipment, and natural environment monitoring. (2) In the intelligent construction method and system provided in some embodiments of this specification, tasks such as hierarchical screening of spatial image data, editing of the stacking order, hierarchical slicing (or going without slicing), construction of the service data directory system, and data copying are completed automatically by computer, avoiding the errors and quality problems that arise when large amounts of service data are screened and processed manually, thereby improving the accuracy and reliability of the spatial image service data. (3) In the method and system provided in some embodiments of this specification, by configuring a metadata table corresponding to each spatial image in the image library, the spatial images stored in the library gain the conditions for SQL operation, enabling fast querying and reading of image information under any spatial region and any image attribute condition.
It should be noted that different embodiments may produce different benefits; in any given embodiment, the benefit may be any one or a combination of those above, or any other benefit that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not limiting. Although not explicitly stated herein, various modifications, improvements, and adaptations of the present disclosure may occur to those skilled in the art. Such modifications, improvements, and adaptations are suggested by this specification and fall within the spirit and scope of its exemplary embodiments.
Meanwhile, this specification uses specific words to describe its embodiments. References to "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is included in at least one embodiment of this specification. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of this specification may be combined as appropriate.
Furthermore, those skilled in the art will appreciate that aspects of this specification may be illustrated and described in terms of several patentable categories or situations, including any novel and useful process, machine, product, or composition of matter, or any novel and useful improvement thereof. Accordingly, aspects of this specification may be implemented entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of hardware and software, which may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of this specification may take the form of a computer program product embodied in one or more computer-readable media containing computer-readable program code.
A computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, in baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic or optical forms, or any suitable combination thereof. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code residing on a computer storage medium may be propagated over any suitable medium, including radio, electrical cable, fiber-optic cable, RF, or the like, or any combination of the foregoing.
The computer program code needed to operate portions of this specification may be written in any one or more programming languages, including object-oriented languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic languages such as Python, Ruby, and Groovy; or other languages. The program code may run entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or processing device. In the latter case, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), connected to an external computer (for example, through the Internet), or used in a cloud computing environment, for example as software as a service (SaaS).
Furthermore, the order in which elements and sequences are processed, the use of numbers or letters, and other designations in this specification are not intended to limit the order of the processes and methods described herein unless expressly recited in the claims. Although the foregoing disclosure discusses, through various examples, certain embodiments currently considered useful, it should be understood that such details are illustrative only and that the appended claims are not limited to the disclosed embodiments; on the contrary, they are intended to cover all modifications and equivalent arrangements within the spirit and scope of the embodiments of the present disclosure. For example, although the system components described above may be implemented by hardware devices, they may also be implemented purely in software, for example by installing the described system on an existing processing device or mobile device.
Likewise, it should be noted that, to simplify the presentation of this disclosure and thereby aid understanding of one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description. This method of disclosure does not imply that the subject matter of this specification requires more features than are recited in the claims. Indeed, the claimed subject matter may lie in fewer than all the features of a single disclosed embodiment.
In some embodiments, numbers describing quantities of components and attributes are used; it should be understood that such numbers are, in some examples, qualified by the modifiers "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending on the desired properties of the individual embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and apply ordinary rounding. Although the numerical ranges and parameters set forth in some embodiments herein are approximations, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, referred to in this specification is hereby incorporated by reference in its entirety. Excluded are application history documents that are inconsistent with or conflict with the content of this specification, as well as any documents, now or later attached to this specification, that limit the broadest scope of the claims of this specification. It is noted that if the description, definition, and/or use of a term in material attached to this specification is inconsistent with or conflicts with what is described in this specification, the description, definition, and/or use of the term in this specification controls.
Finally, it should be understood that the embodiments described in this specification are merely illustrative of the principles of the embodiments of this specification. Other variations are possible within the scope of this description. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present specification may be considered as consistent with the teachings of the present specification. Accordingly, the embodiments of the present specification are not limited to only the embodiments explicitly described and depicted in the present specification.
Claims (10)
1. An intelligent construction method for a space image service, characterized by comprising:
acquiring input conditions, wherein the input conditions comprise a spatial range, a time, and an overlay relation;
determining a first spatial image from an image library based on the input conditions;
determining a level corresponding to each first spatial image according to the spatial range and/or image features of the first spatial image, wherein the image features comprise at least resolution;
dividing the first spatial images according to the levels to obtain a target spatial image corresponding to each level; and
combining the target spatial images corresponding to each level, and constructing a service address based on the label corresponding to each level, to obtain a service logic data set of each level corresponding to the input conditions.
2. The method of claim 1, wherein the spatial range is determined based on user input, wherein the user input comprises text input, selection input based on a user interface, or click input.
3. The method of claim 1, wherein each spatial image in the image library is configured with a corresponding metadata table, and the metadata of each spatial image in the metadata table comprises at least a coordinate system, a coverage, and a time, the coverage being determined based on an image pyramid; and
wherein determining a first spatial image from an image library based on the input conditions comprises: querying the metadata table for spatial images matching the input conditions, and taking the matching spatial images as the first spatial images.
4. The method of claim 3, wherein the level corresponding to the first spatial image is positively correlated with the coverage corresponding to the first spatial image and negatively correlated with the resolution of the first spatial image.
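A minimal sketch of the monotonic relation just described, assuming the common web-map pyramid convention (level 0 spans the equator in one 256-pixel tile, each deeper level halving the metres per pixel); that convention is an assumption for illustration, not something the claim prescribes:

```python
import math

def level_for_resolution(resolution_m, tile_px=256):
    """Pick the pyramid level whose nominal resolution best matches an
    image's ground resolution: coarser resolution (wider coverage per
    pixel) maps to a lower level, finer resolution to a higher level."""
    equator_m = 40_075_016.686              # Earth's equatorial circumference
    level0_res = equator_m / tile_px        # ~156543 m/px at level 0
    z = math.log2(level0_res / resolution_m)
    return max(0, round(z))

# Coarse, wide-coverage imagery lands on low levels; sub-metre imagery on high ones.
print(level_for_resolution(500.0))   # coarse satellite mosaic
print(level_for_resolution(0.5))     # sub-metre aerial survey image
```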
5. The method of claim 3, wherein combining the target spatial images corresponding to each level and constructing the service address based on the label corresponding to each level comprises:
for each target level,
stitching the target spatial images according to the image features and/or spatial relationships of the target spatial images, and resampling the stitched image to obtain a resampled image;
slicing the resampled image, and storing the slice images obtained after slicing in a target file directory;
taking the level number and time corresponding to the target level as the label of the target file directory; and
constructing a service address corresponding to the target level according to the storage path of the target file directory and the label.
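The last two steps above can be sketched as follows. The label format, directory layout, host name, and the `{z}/{x}/{y}` tile placeholder are illustrative assumptions; the claim only requires that the service address be derived from the target directory's storage path together with its label (level number and time).

```python
import tempfile
from pathlib import Path

def build_service_address(storage_root, level, timestamp,
                          host="https://tiles.example.com"):
    """Derive a service address from the storage path of the target file
    directory plus its label (level number + time)."""
    label = f"L{level:02d}_{timestamp}"          # label = level number and time
    target_dir = Path(storage_root) / label      # directory holding the slice images
    target_dir.mkdir(parents=True, exist_ok=True)
    # The published address points at the labelled directory; {z}/{x}/{y}
    # is a conventional tile-addressing placeholder, not mandated here.
    return f"{host}/{target_dir.name}/{{z}}/{{x}}/{{y}}.png"

root = tempfile.mkdtemp()
addr = build_service_address(root, 12, "20240321")
print(addr)  # → https://tiles.example.com/L12_20240321/{z}/{x}/{y}.png
```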
6. The method of claim 5, wherein stitching the target spatial images according to the image features and/or spatial relationships of the target spatial images comprises:
stitching the target spatial images according to the coordinate system corresponding to the target spatial images, and/or stitching the target spatial images based on an image fusion algorithm.
7. The method of claim 6, wherein the image fusion algorithm comprises: a fusion algorithm based on band selection, a fusion algorithm based on wavelet transform, a fusion algorithm based on principal component analysis, or a fusion algorithm based on deep learning.
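As a toy sketch of one of the listed fusion families, here is a minimal principal-component-analysis pan-sharpening routine: the multispectral bands are projected onto their principal components, the first component is swapped for a histogram-matched panchromatic band, and the result is projected back. This is an assumption-laden illustration, not the specific algorithm of the disclosure.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Toy PCA fusion: substitute the matched pan band for PC1 of the
    multispectral cube, then invert the PCA projection."""
    h, w, bands = ms.shape
    X = ms.reshape(-1, bands).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigen-decomposition of the band covariance yields the PCA basis.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]       # order components by variance
    pcs = Xc @ vecs
    # Match the pan band to PC1's mean and spread, then substitute it.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * (pcs[:, 0].std() + 1e-12) + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean                  # invert the projection
    return fused.reshape(h, w, bands)

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 3))     # small synthetic multispectral cube
pan = rng.random((8, 8))       # synthetic panchromatic band
fused = pca_fuse(ms, pan)
print(fused.shape)  # → (8, 8, 3)
```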
8. The method of claim 7, further comprising:
detecting whether the stitched image is complete; and
when the stitched image is detected to be incomplete, generating a prompt, or re-searching the first spatial images for the target spatial image corresponding to the target level according to the spatial range and/or image features corresponding to the target level.
9. The method of claim 5, wherein each slice image is 256×256 pixels in size, and the filename of each slice image includes its corresponding level number.
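The slicing scheme of claim 9 can be sketched as below. The `{level}_{row}_{col}.png` naming and the zero-padding of edge tiles are illustrative choices; the claim only fixes the 256×256 tile size and requires the level number in each filename.

```python
import numpy as np

TILE = 256

def slice_image(img, level):
    """Cut a (H, W) array into 256x256 tiles, naming each tile with its
    level, row, and column indices."""
    tiles = {}
    h, w = img.shape[:2]
    for r in range(0, h, TILE):
        for c in range(0, w, TILE):
            # Pad edge tiles with zeros so every slice is exactly 256x256.
            tile = np.zeros((TILE, TILE), dtype=img.dtype)
            block = img[r:r + TILE, c:c + TILE]
            tile[:block.shape[0], :block.shape[1]] = block
            tiles[f"{level}_{r // TILE}_{c // TILE}.png"] = tile
    return tiles

img = np.arange(300 * 520).reshape(300, 520) % 255   # synthetic 300x520 image
tiles = slice_image(img, level=12)
print(sorted(tiles))              # 2 rows x 3 columns of tiles at level 12
print(tiles['12_0_0.png'].shape)  # → (256, 256)
```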
10. An intelligent construction system for a space image service, characterized by comprising:
a condition acquisition module configured to acquire input conditions, wherein the input conditions comprise a spatial range, a time, and an overlay relation;
a first spatial image determining module configured to determine a first spatial image from an image library based on the input conditions;
a level determining module configured to determine a level corresponding to each first spatial image according to the spatial range and/or image features of the first spatial image, wherein the image features comprise at least resolution;
a target spatial image determining module configured to divide the first spatial images according to the levels to obtain a target spatial image corresponding to each level; and
a service address construction module configured to combine the target spatial images corresponding to each level and construct a service address based on the label corresponding to each level, to obtain a service logic data set of each level corresponding to the input conditions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410326876.9A CN118152599A (en) | 2024-03-21 | 2024-03-21 | Intelligent construction method and system for space image service |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410326876.9A CN118152599A (en) | 2024-03-21 | 2024-03-21 | Intelligent construction method and system for space image service |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118152599A true CN118152599A (en) | 2024-06-07 |
Family
ID=91300369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410326876.9A Pending CN118152599A (en) | 2024-03-21 | 2024-03-21 | Intelligent construction method and system for space image service |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118152599A (en) |
2024-03-21: Application CN202410326876.9A filed in China (CN), published as CN118152599A; status Pending.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20080074423A1 (en) | Method and System for Displaying Graphical Objects on a Digital Map | |
US20100332468A1 (en) | Spatial search engine support of virtual earth visualization system | |
US20190361847A1 (en) | Spatial Linking Visual Navigation System and Method of Using the Same | |
Lawhead | Learning geospatial analysis with Python | |
CN117931810B (en) | Structured management method and system for spatial image data | |
CN115617889A (en) | GIS-based survey data acquisition and processing method and system | |
CN109688223B (en) | Ecological environment data resource sharing method and device | |
Choudhury et al. | A web-based land management system for Bangladesh | |
Bajauri et al. | Developing a geodatabase for efficient UAV-based automatic container crane inspection | |
KR102097592B1 (en) | Method for providing sentinel satellite imagery download service | |
Docan | Learning ArcGIS for desktop | |
Giuliani et al. | Bringing GEOSS services into practice | |
CN118152599A (en) | Intelligent construction method and system for space image service | |
CN105260389A (en) | Unmanned aerial vehicle reconnaissance image data management and visual display method | |
CN110942029B (en) | Ground object detection Mask R-CNN model training method based on GIS technology and spatial data | |
Prescott et al. | Investigating Application of LiDAR for Nuclear Power Plants | |
CN118467620B (en) | Method and system for processing application by simulating data of task execution environment | |
CN109377479A (en) | Satellite dish object detection method based on remote sensing image | |
CN118170856B (en) | Data pool construction method and system suitable for modeling of artificial intelligent geological map | |
Lu et al. | An efficient annotation method for big data sets of high-resolution earth observation images | |
Gao et al. | Development of broad area target search system based on deep learning | |
US12131279B1 (en) | System and method for unexpected event preparation | |
McAvoy et al. | OPENHERITAGE3D: Building AN Open Visual Archive for Site Scale Giga-Resolution LIDAR and Photogrammetry Data | |
Li et al. | Utilizing Chinese high-resolution satellite images for inspection of unauthorized constructions in Beijing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||