CN112287756A - Ground object identification method, device, storage medium and terminal - Google Patents

Ground object identification method, device, storage medium and terminal

Info

Publication number
CN112287756A
CN112287756A (application CN202011023367.7A)
Authority
CN
China
Prior art keywords
image
sample
remote sensing
images
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011023367.7A
Other languages
Chinese (zh)
Inventor
焦士琦
宋宽
姜文聪
徐鲁冰
魏泽强
胡畔
谭文轩
彭珺
高强
李苗苗
吴江
张甲伟
苏少男
张艳忠
杜腾腾
简敏
庄莹
徐春萌
彭欣
张弓
顾竹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiage Tiandi Technology Co ltd
Original Assignee
Beijing Jiage Tiandi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiage Tiandi Technology Co ltd filed Critical Beijing Jiage Tiandi Technology Co ltd
Priority to CN202011023367.7A
Publication of CN112287756A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a ground object identification method, a ground object identification device, a storage medium and a terminal, belonging to the technical field of computers. The method is applied to a terminal. The terminal fuses a first remote sensing image and a second remote sensing image based on a principal component analysis (PCA) algorithm to obtain a fused image, where the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments and contain different ground objects. The fused image is then processed based on a preset newly added ground object identification model to obtain an identification result of the newly added ground objects. Automatic identification of newly added ground objects can thus be achieved, labor cost is reduced, and the efficiency of identifying newly added ground objects is effectively improved.

Description

Ground object identification method, device, storage medium and terminal
Technical Field
The present application relates to the field of computer technologies, and in particular, to a ground object identification method and apparatus, a storage medium, and a terminal.
Background
With the development of remote sensing technology, using remote sensing images to identify ground objects has gradually become an important application of remote sensing, for example identifying ground objects such as houses, greenhouses and oil fields, and, in specific scenes, identifying newly added ground objects, such as a newly added oil field. In the related art, newly added ground objects in remote sensing images are generally labeled pixel by pixel through manual annotation. This has a high labor cost and a long processing period, and is easily affected by weather, so the image data may be inaccurate and the newly added ground objects cannot be labeled accurately.
Disclosure of Invention
The embodiment of the application provides a ground object identification method and apparatus, a storage medium and a terminal, which can solve the problem in the related art that labeling newly added ground objects has a high labor cost. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a ground object identification method, where the method includes:
performing fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fused image; the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments;
and processing the fusion image based on a preset newly added ground object identification model to obtain an identification result of the newly added ground object.
In a second aspect, an embodiment of the present application provides a ground object recognition apparatus, including:
the first processing module is used for carrying out fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fused image; the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments;
and the second processing module is used for processing the fusion image based on a preset newly added ground object identification model to obtain an identification result of the newly added ground object.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, including: the system comprises a processor, a memory and a display screen; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects of the technical solutions provided by some embodiments of the application include at least the following:
when the scheme of the embodiment of the application is executed, the terminal carries out fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fusion image, the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments, the first remote sensing image and the second remote sensing image are different in ground feature, the fusion image is processed based on a preset newly added ground feature identification model to obtain an identification result of the newly added ground feature, automatic identification of the newly added ground feature can be achieved, labor cost is reduced, and efficiency of identifying the newly added ground feature is effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic system architecture diagram of a method for recognizing a feature provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for recognizing a ground object according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a method for recognizing a ground object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a Cascade R-CNN model provided in an embodiment of the present application;
fig. 5 is another schematic flow chart of a method for recognizing a ground object according to an embodiment of the present application;
fig. 6 is a schematic view of an identification effect of a new feature provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a ground recognition device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, a schematic diagram of a system architecture for ground object identification provided in an embodiment of the present application is shown, including a remote sensing satellite 101, a remote sensing satellite ground station 102, a terminal device 103, and a target object 104 photographed by the remote sensing satellite 101. The remote sensing satellite 101 takes a remote sensing image of the target object 104 and transmits it to the terminal device 103 through the remote sensing satellite ground station 102.
Generally, the staff of the remote sensing satellite ground station 102 allocate all resources according to task requirements, determine when different satellites pass over the remote sensing satellite ground station 102 by calculating satellite orbits, and arrange resources such as the antennas, recording and transmission of the remote sensing satellite ground station 102 according to the scientific tasks. When the remote sensing satellite 101 passes over the remote sensing satellite ground station 102, the remote sensing satellite 101 converts the acquired data into electromagnetic waves suitable for transmission in free space and sends them to the ground station. The remote sensing satellite ground station 102 aims its antenna at the position where the remote sensing satellite 101 is about to appear; when the satellite appears and starts to send electromagnetic wave signals, the antenna rotates continuously so that the signal is locked and tracked throughout the pass. Meanwhile, the remote sensing satellite ground station 102 amplifies, frequency-converts and demodulates the received electromagnetic wave signals, and sends the resulting raw satellite baseband data to the terminal device 103, which further extracts and analyzes the received data.
The remote sensing satellite 101 is an artificial satellite used as an outer-space remote sensing platform; remote sensing that uses a satellite as its platform is called satellite remote sensing. Typically, the remote sensing satellite 101 can operate in orbit for years, and the satellite orbit can be determined as desired. The remote sensing satellite 101 can cover the entire earth or any designated area within a given time, and when orbiting geosynchronously it can continuously observe a designated area of the earth's surface. All remote sensing satellites need a remote sensing satellite ground station; the satellite data obtained from the remote sensing platform can be used to monitor conditions in agriculture, forestry, oceans, land resources, environmental protection, meteorology and the like. Remote sensing satellites mainly comprise three types: meteorological satellites, land satellites and ocean satellites.
The remote sensing satellite ground station 102, also called a satellite ground receiving station, is mainly used to capture and track the satellite; to receive, demodulate and record satellite remote sensing data and auxiliary data; to monitor and judge in real time, through a quick-view system, the working conditions of the satellite remote sensor and its transmission system; and to evaluate image quality. Within the antenna's tracking and receiving range, the ground receiving station can directly receive the remote sensing data collected by the satellite and transmitted in real time. The remote sensing satellite ground station 102 mainly comprises a parabolic antenna, a feed source, a tuner and a satellite receiver. The parabolic antenna reflects and converges the satellite signal energy from the air to a point (the focus). The feed source, a feed horn placed at the focus of the parabolic antenna, collects the satellite signals: it is the source that feeds the energy and gathers the energy converged at the focus. The tuner (LNB) down-converts and amplifies the satellite signals sent by the feed source and then passes them to the satellite receiver. The satellite receiver demodulates the signal from the tuner to recover the data signal, or the satellite TV image and audio signals.
The terminal device 103 may be any of various electronic devices having a display screen, including but not limited to a tablet computer, a portable computer, a desktop computer, and the like. A preset newly added ground object identification model is built into the terminal device 103, so the terminal device 103 can perform newly added ground object identification on the obtained remote sensing images of the same geographic area at different moments. The terminal device 103 may be hardware or software. When the terminal device 103 is software, it may be installed in the electronic devices listed above and implemented as a plurality of software modules or as a single software module, which is not specifically limited herein. When the terminal device 103 is hardware, a display device may also be installed on it; the display may be any device capable of implementing a display function, such as a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink screen, a liquid crystal display (LCD), or a plasma display panel (PDP).
It should be understood that the numbers of remote sensing satellites, remote sensing satellite ground stations and terminal devices in fig. 1 are illustrative only; any number of remote sensing satellites, ground stations and terminal devices may be provided according to practical needs.
In the following method embodiments, for convenience of description, only the main execution body of each step is described as a terminal.
The ground object identification method provided by the embodiments of the present application is described in detail below with reference to fig. 2 to 5.

Referring to fig. 2, a flow chart of a ground object identification method provided in an embodiment of the present application is shown. This embodiment is illustrated by applying the method to a terminal; the method may include the following steps:
s201, carrying out fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fusion image.
A remote sensing image is a film or photograph recording the electromagnetic radiation of various ground objects; it is mainly divided into aerial photographs and satellite photographs, and a remote sensing image processed by a computer is a digital image obtained by analog/digital (A/D) conversion. The first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments; they may contain the same ground objects, or different ground objects (for example, the second remote sensing image contains a newly added ground object compared with the first remote sensing image). The fused image is obtained by processing the first remote sensing image and the second remote sensing image with the PCA algorithm and contains the information of both. Principal component analysis (PCA) is a statistical method: through a mathematical dimension-reduction procedure, a set of possibly correlated variables is converted by orthogonal transformation into a set of linearly uncorrelated variables that preserve the information content of the original variables; the converted variables are called principal components.
Generally, the terminal acquires two remote sensing images of the same geographic area at different moments, namely a first remote sensing image and a second remote sensing image, in which newly added ground objects need to be identified. The first remote sensing image is cropped to obtain a plurality of first small block images, and the second remote sensing image is cropped to obtain a plurality of second small block images; the first small block images correspond to the second small block images one to one. The first small block images and the second small block images are then respectively preprocessed, where the preprocessing includes at least one of: stretching, brightness adjustment and tone adjustment. Finally, the first small block images and the second small block images are fused based on the PCA algorithm to obtain a plurality of fused images; the number of first small block images is consistent with that of second small block images, and the number of fused images is consistent with both.
In addition, the terminal is not limited to processing two remote sensing images of the same geographic area at different times; it may also process a larger number of remote sensing images of the same geographic area at different times. For example, the terminal may process remote sensing images at three different times t1, t2 and t3, where the image at time t2 contains newly added ground objects compared with the image at time t1, and the image at time t3 contains newly added ground objects compared with the image at time t1. The terminal can then output the newly added ground object position information of the t2 image relative to the t1 image, of the t3 image relative to the t1 image, and of the t3 image relative to the t2 image, as sketched below.
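As an illustrative aid (not part of the disclosure), a minimal Python sketch of this pairwise processing follows; `fuse_and_detect` is a hypothetical callable standing in for the PCA fusion and recognition steps of S201 and S202:

```python
from itertools import combinations

def detect_new_objects_over_time(images_by_time, fuse_and_detect):
    """For co-registered remote sensing images of the same area at several
    moments, report newly added ground objects for every earlier/later
    pair, e.g. (t1, t2), (t1, t3), (t2, t3)."""
    results = {}
    for earlier, later in combinations(sorted(images_by_time), 2):
        # fuse_and_detect(first_image, second_image) -> identification result
        results[(earlier, later)] = fuse_and_detect(
            images_by_time[earlier], images_by_time[later]
        )
    return results
```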
Through the PCA algorithm, the image can retain the information of the original data to the maximum extent while the data dimension is reduced. The main idea of fusing the first remote sensing image and the second remote sensing image based on the PCA algorithm is as follows: first, a principal component transform is applied to the multispectral image; then the stretched high-spatial-resolution image replaces the first principal component and an inverse principal component transform is applied, yielding the fused image. Processing the first remote sensing image and the second remote sensing image with the PCA algorithm produces a fused image with better spectral characteristics, thereby realizing image enhancement. One conventional realization of this substitution fusion is sketched below.
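The patent gives no code for this step; the numpy sketch below shows one conventional realization of the PCA substitution fusion just described, under stated assumptions (the choice of replacement band and the mean/std stretch are illustrative choices, not details disclosed by the patent):

```python
import numpy as np

def pca_substitution_fusion(multispectral, replacement_band):
    """PCA substitution fusion sketch: principal component transform,
    swap a stretched replacement band in for the first principal
    component, then inverse transform back to band space.

    multispectral: (H, W, C) array, e.g. one temporal image's bands.
    replacement_band: (H, W) array, e.g. a band from the other moment.
    """
    h, w, c = multispectral.shape
    pixels = multispectral.reshape(-1, c).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Principal component transform: eigenvectors of the band covariance.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # sort by variance
    scores = centered @ eigvecs
    # Stretch the replacement band to PC1's mean/std, then substitute it.
    pc1 = scores[:, 0]
    rep = replacement_band.reshape(-1).astype(np.float64)
    scores[:, 0] = (rep - rep.mean()) / (rep.std() + 1e-12) * pc1.std() + pc1.mean()
    # Inverse principal component transform yields the fused image.
    return (scores @ eigvecs.T + mean).reshape(h, w, c)
```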
S202, processing the fused image based on a preset newly added ground object identification model to obtain an identification result of the newly added ground objects.
The identification result indicates whether the fused image contains a newly added ground object and, if so, the newly added ground object's position information.
Generally, the preset newly added ground object identification model can automatically identify the newly added ground objects in the fused image and obtain their identification result. The first remote sensing image and the second remote sensing image may contain the same ground objects, or different ground objects (for example, the second remote sensing image contains a newly added ground object compared with the first remote sensing image). When the two images contain different ground objects, the identification result is the position information of the newly added ground objects of the second remote sensing image compared with the first; position information refers to the coordinate information corresponding to the newly added ground objects. When the ground objects contained in the two images are the same, the identification result is that there is no newly added ground object, and reminding information is displayed through the display unit; the reminding information indicates that the second remote sensing image contains no newly added ground object compared with the first remote sensing image.
The preset newly added ground object identification model is obtained by training on sample images at multiple moments; its training process may include:
and acquiring a first sample image and a second sample image, wherein the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects. And cutting the first sample image to obtain a plurality of first sample small block images, and cutting the second sample image to obtain a plurality of second sample small block images, wherein the first sample small block images and the second sample small block images are in one-to-one correspondence. Respectively preprocessing the plurality of first sample small block images and the plurality of second sample small block images, and fusing the preprocessed plurality of first sample small block images and the preprocessed plurality of second sample small block images based on a PCA algorithm to obtain a plurality of sample fused images, wherein the preprocessing comprises at least one of the following items: stretching, brightness adjustment and color tone adjustment. Training the plurality of fusion labeling images based on a preset ground feature recognition model to obtain a preset newly added ground feature recognition model, wherein the preset ground feature recognition model is obtained by training a sample image at a single moment, the fusion labeling images are obtained by artificially labeling the sample fusion images, and the artificial labeling processing refers to position labeling of the newly added ground features of the sample fusion images.
The training process of the preset ground object identification model may include:
and acquiring a third sample image, and cutting the third sample image to obtain a plurality of third sample small block images. Preprocessing the plurality of third sample patch images, the preprocessing including at least one of: stretching, brightness adjustment and color tone adjustment. Training a plurality of sample labeling images based on a target detection Cascade R-CNN model to obtain a preset ground object recognition model, wherein the sample labeling images are obtained by carrying out manual labeling processing on a third sample small block image after preprocessing, and the sample labeling images contain position marks of ground objects.
In another possible implementation, the training process of the preset newly added ground object identification model may include:
and performing fusion processing on the first sample image and the second sample image based on a PCA algorithm to obtain a plurality of sample fusion images, wherein the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects. Preprocessing a plurality of sample fusion images, training a plurality of fusion labeling images based on a target detection Cascade R-CNN model to obtain a preset newly added ground object identification model, and obtaining the fusion labeling images after the sample fusion images are subjected to manual labeling, wherein the manual labeling refers to position labeling of the newly added ground objects of the sample fusion images.
According to the ground object identification method provided by this scheme, the terminal fuses the first remote sensing image and the second remote sensing image based on the principal component analysis (PCA) algorithm to obtain a fused image; the two images are remote sensing images of the same geographical area at different moments and contain different ground objects. The fused image is processed based on the preset newly added ground object identification model to obtain the identification result of the newly added ground objects, namely the position information of the second remote sensing image's newly added ground objects compared with the first remote sensing image. Automatic identification of newly added ground objects can thus be achieved, labor cost is reduced, and the efficiency of identifying newly added ground objects is effectively improved.
Referring to fig. 3, another flow chart of the ground object identification method is provided in the embodiment of the present application. This embodiment is illustrated by applying the method to a terminal. The method may include the following steps:
s301, a first remote sensing image and a second remote sensing image are obtained.
The definitions of the remote sensing image, the first and second remote sensing images and the fused image are as in step S201: a remote sensing image is a film or photograph recording the electromagnetic radiation of various ground objects, mainly divided into aerial photographs and satellite photographs, and when processed by a computer it is a digital image obtained by analog/digital (A/D) conversion; the first and second remote sensing images are remote sensing images of the same geographical area at different moments, which may contain the same or different ground objects; the fused image is obtained by PCA processing of the two and contains the information of both. Generally, a certain time span exists between the first remote sensing image and the second remote sensing image, so newly added ground objects may appear in the area they cover.
S302, the first remote sensing image is cut to obtain a plurality of first small images, and the second remote sensing image is cut to obtain a plurality of second small images.
The first small block images are obtained by randomly cropping the first remote sensing image, which yields a plurality of small block images corresponding to the first remote sensing image; likewise, the second small block images are obtained by randomly cropping the second remote sensing image. The geographic positions covered by the first small block images correspond one to one to the geographic positions covered by the second small block images.
Generally, an original remote sensing image can be cropped to obtain a plurality of corresponding small block images; cropping the remote sensing image into small block images is the process of capturing parts of the original remote sensing image to generate new images, as sketched below.
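A minimal sketch of this cropping, assuming the two temporal images are co-registered and equally sized (the 512-pixel tile size is taken from the description of step S304, and the random sampling is one reading of "randomly cropping"):

```python
import numpy as np

def paired_random_crops(image_a, image_b, n_crops, tile=512, seed=0):
    """Crop both temporal images at the SAME positions, so each first
    small block image corresponds one-to-one to a second small block
    image covering the same geographic area."""
    rng = np.random.default_rng(seed)
    h, w = image_a.shape[:2]
    patches_a, patches_b, origins = [], [], []
    for _ in range(n_crops):
        top = int(rng.integers(0, h - tile + 1))
        left = int(rng.integers(0, w - tile + 1))
        patches_a.append(image_a[top:top + tile, left:left + tile])
        patches_b.append(image_b[top:top + tile, left:left + tile])
        origins.append((top, left))   # kept so detections can be mapped
                                      # back to full-image coordinates
    return patches_a, patches_b, origins
```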
S303, respectively preprocessing the plurality of first small block images and the plurality of second small block images.
The preprocessing includes at least one of: stretching, brightness adjustment and tone adjustment.
Generally, because of seasonal changes, weather, cloud cover, orthorectification and the like during acquisition, remote sensing images acquired at different times differ in brightness, tone, angle and so on. Preprocessing operations such as stretching, brightness adjustment and tone adjustment can therefore be applied to the cropped first and second small block images, so that their brightness, tone and image angle are unified, which facilitates the subsequent fusion; a sketch follows.
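One plausible reading of this preprocessing, sketched with numpy (the percentile stretch and the mean/std tone matching are assumptions; the patent only names the operation types):

```python
import numpy as np

def stretch(patch, low=2, high=98):
    """Percentile-stretch each band to [0, 1], a common contrast stretch."""
    out = np.empty(patch.shape, dtype=np.float64)
    for b in range(patch.shape[-1]):
        lo, hi = np.percentile(patch[..., b], [low, high])
        out[..., b] = np.clip((patch[..., b] - lo) / (hi - lo + 1e-12), 0, 1)
    return out

def match_tone(patch, reference):
    """Shift/scale each band of `patch` to the mean and std of `reference`,
    a simple way to unify brightness and tone across the two moments."""
    out = np.empty(patch.shape, dtype=np.float64)
    for b in range(patch.shape[-1]):
        p, r = patch[..., b], reference[..., b]
        out[..., b] = (p - p.mean()) / (p.std() + 1e-12) * r.std() + r.mean()
    return out
```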
S304, based on the PCA algorithm, the plurality of first small images and the plurality of second small images are fused to obtain a fused image.
The fused images are obtained by processing the plurality of first small block images and the plurality of second small block images with the PCA algorithm, and contain the information of the first and second remote sensing images. The number of first small block images is consistent with that of second small block images, and the number of fused images is consistent with both.
Generally, PCA-based image fusion can be applied to the plurality of first small block images and the plurality of second small block images so that the resulting fused images have consistent brightness and tone, and the newly added ground objects in the fused images are remarkably highlighted.
Through the PCA algorithm, the image can retain the information of the original data to the maximum extent while the data dimension is reduced; the main idea of the fusion is as described in step S201. Using the PCA algorithm to process the first and second small block images produces fused images with better spectral characteristics, thereby realizing image enhancement. Generally, the first and second small block images can be images of 512 × 512 pixels and 4 channels, and the fused images obtained after PCA dimension reduction are images of 512 × 512 pixels and 3 channels; the 4 channels are four bands (blue, green, red and infrared), and the 3 channels are three bands (blue, green and red). One plausible concrete reading is sketched below.
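The patent does not spell out how two 4-channel tiles become one 3-channel fused tile; one plausible reading, sketched below under that assumption, is to stack the co-located tiles along the band axis and keep the top three principal components:

```python
import numpy as np

def pca_reduce_fuse(patch_a, patch_b, out_bands=3):
    """Stack two co-located (H, W, 4) tiles into (H, W, 8) and keep the
    top `out_bands` principal components as the fused tile. A sketch of
    one possible reading, not the patent's exact procedure."""
    stacked = np.concatenate([patch_a, patch_b], axis=-1)
    h, w, c = stacked.shape
    pixels = stacked.reshape(-1, c).astype(np.float64)
    pixels -= pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:out_bands]]
    return (pixels @ top).reshape(h, w, out_bands)
```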
S305, acquiring a third sample image, and performing cropping processing on the third sample image to obtain a plurality of third sample small block images.
The third sample image is a single-moment remote sensing image containing a large number of ground objects, used as sample data for training the preset single-moment ground object identification model. The third sample small block images are obtained by randomly cropping the third sample image.
S306, preprocessing the plurality of third sample small block images, and training the sample labeled images based on the object detection Cascade R-CNN model to obtain a preset ground object identification model.
The preprocessing includes at least one of: stretching, brightness adjustment and tone adjustment. The sample labeled images are obtained by manually annotating the preprocessed third sample small block images and contain the position marks of the ground objects.
Generally, the third sample small block images are preprocessed so that their brightness and tone are uniform; the preprocessed images are 512 × 512-pixel, 3-channel RGB images. The preprocessed third sample small block images are annotated with ground objects manually, thereby obtaining sample data that can be used for training the preset single-moment ground object identification model, namely the sample labeled images. Taking the sample labeled images as sample data, the Cascade R-CNN model is iteratively trained to generate a model capable of identifying ground objects, namely the preset ground object identification model. The Cascade R-CNN model used in this application is shown in fig. 4.
The Cascade R-CNN model is a cascade of detection models, each trained on positive and negative samples defined by a different intersection-over-union (IOU) threshold; the output of one detection model serves as the input of the next, and the later a model sits in the cascade, the larger its threshold. A detection model is most effective on candidate boxes whose IOU is close to the threshold it was trained with, so each model's IOU threshold should be as close as possible to the IOU of its input candidate boxes. By cascading, each stage focuses on detecting candidate boxes whose IOU lies within a certain range, and since the IOU of the output is generally larger than that of the input, the detection effect becomes better and better, as illustrated by the sketch below.
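As an illustration only, the pure-Python sketch below shows the threshold scheme described above. The per-stage thresholds 0.5/0.6/0.7 come from the original Cascade R-CNN paper, not from this patent, and the real model also regresses the boxes at each stage so that later stages receive higher-quality inputs:

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def cascade_stage_labels(proposals, gt_boxes, thresholds=(0.5, 0.6, 0.7)):
    """For each cascade stage, split proposals into positives/negatives
    using that stage's (increasing) IOU threshold."""
    per_stage = []
    for t in thresholds:
        best = [max((iou(p, g) for g in gt_boxes), default=0.0)
                for p in proposals]
        per_stage.append(["pos" if b >= t else "neg" for b in best])
    return per_stage
```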
S307, a first sample image and a second sample image are obtained.
The first sample image and the second sample image are sample remote sensing images of the same geographic area at different moments; a certain span exists between the time point corresponding to the first sample image and that corresponding to the second sample image, so a newly added ground object may appear in the geographic area within that period (from the time point of the first sample image to that of the second sample image).
S308, the first sample image is cut to obtain a plurality of first sample small images, and the second sample image is cut to obtain a plurality of second sample small images.
The first sample small block images are obtained by randomly cropping the first sample image, which yields a plurality of sample small block images corresponding to it; likewise, the second sample small block images are obtained by randomly cropping the second sample image.
Generally, original sample remote sensing images at different times can be cropped to obtain a plurality of corresponding sample small block images; cropping a sample remote sensing image into sample small block images is the process of capturing parts of the original sample image to generate new sample images.
S309, respectively preprocessing the plurality of first sample small block images and the plurality of second sample small block images, and fusing the preprocessed plurality of first sample small block images and the preprocessed plurality of second sample small block images based on a PCA algorithm to obtain a plurality of sample fused images.
The sample fused images are obtained by processing the plurality of first sample small block images and the plurality of second sample small block images with the PCA algorithm, and contain the information of the first and second sample images. The number of first sample small block images is consistent with that of second sample small block images, and the number of sample fused images is consistent with both.
Generally, preprocessing operations such as stretching, brightness adjustment and tone adjustment are applied to the cropped first and second sample small block images so that their brightness, tone and image angle are unified, which facilitates the subsequent fusion. Based on the PCA algorithm, image fusion can then be applied to the first and second sample small block images, ensuring that the resulting sample fused images have consistent brightness and tone and that the newly added ground objects in them are remarkably highlighted.
S310, training the plurality of fused labeled images based on the preset ground object identification model to obtain a preset newly added ground object identification model.
The preset ground object identification model is obtained by training on a single-moment sample image (the third sample image); the fused labeled images are obtained by manually annotating the sample fused images, the manual annotation being the position labeling of the newly added ground objects in the sample fused images.
Generally, the plurality of first sample small block images and the plurality of second sample small block images are respectively preprocessed so that their brightness and tone are uniform; the preprocessed images are all 512 × 512-pixel, 4-channel sample small block images. After PCA fusion of the first and second sample small block images, a plurality of sample fused images are obtained. The newly added ground objects are then labeled manually, thereby obtaining sample data that can be used for training the preset multi-moment newly added ground object identification model, namely the fused labeled images. Taking the fused labeled images as sample data, the pre-trained preset ground object identification model is iteratively trained (fine-tuned) to generate a model capable of identifying newly added ground objects, namely the preset newly added ground object identification model. This model can also identify newly added ground objects across remote sensing images of the same geographic area at more than two moments, and is not limited to two moments. A sketch of such fine-tuning is given below.
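A minimal transfer-learning loop consistent with this description is sketched below. It assumes a PyTorch detector that, like torchvision-style detection models, returns a dict of losses in training mode, and a checkpoint saved as a state dict; these are assumptions for illustration, not details disclosed by the patent:

```python
import torch

def finetune_new_object_model(model, pretrained_path, train_loader,
                              epochs=12, lr=1e-3):
    """Initialize from the single-moment ground object model, then
    fine-tune on fused, labeled tiles (transfer learning)."""
    # strict=False tolerates heads whose shapes differ from the checkpoint.
    model.load_state_dict(torch.load(pretrained_path), strict=False)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in train_loader:
            loss = sum(model(images, targets).values())  # sum loss components
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```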
S311, processing the fused image based on the preset newly added ground object identification model to obtain the identification result of the newly added ground objects.
The identification result indicates whether the fused image contains a newly added ground object and, if so, its position information.
Generally, newly added ground object identification is performed on the fused images obtained by PCA fusion, using the newly added ground object identification model trained in advance. If a fused image contains a newly added ground object, the identification result is the position information of the newly added ground objects of the second remote sensing image compared with the first remote sensing image; if it contains no newly added ground object, the identification result is that there is no newly added ground object, and reminding information is displayed through the display unit, indicating that the second remote sensing image contains no newly added ground object compared with the first.
For example: the remote sensing images at times t1 and t2 are two remote sensing images of the same geographic area at different times, and the terminal processes them based on the preset newly added ground object identification model. If the t2 image has newly added ground objects compared with the t1 image, their position information is obtained; if not, a reminding message such as "no newly added ground object appears at present" is displayed through the display unit of the terminal. A minimal sketch of this result handling is given below.
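A minimal sketch of this result handling (the model interface returning per-tile boxes and scores is a hypothetical stand-in):

```python
def report_new_objects(model, fused_tiles, origins, score_thresh=0.5):
    """Run the trained model over the fused tiles; return the newly added
    ground objects' positions in full-image coordinates, or a reminder
    message when none are found."""
    detections = []
    for tile, (top, left) in zip(fused_tiles, origins):
        boxes, scores = model(tile)          # assumed per-tile interface
        for (x1, y1, x2, y2), s in zip(boxes, scores):
            if s >= score_thresh:
                detections.append((x1 + left, y1 + top, x2 + left, y2 + top))
    if not detections:
        return "No newly added ground object appears at present."
    return detections
```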
In the training strategy of the model, the embodiment of the application adjusts and optimizes the preset single-time-point ground object identification model by transfer learning, so that the preset multi-time-point newly added ground object identification model is trained quickly; this greatly reduces the number of samples that need to be annotated and thus the labor cost. Meanwhile, on the basis of the annotation samples of the preset single-time-point model, annotation samples for the multi-time-point model can be generated quickly, effectively reducing the time cost of training the model and annotating samples.
As can be seen from the above, in the ground object identification method provided by this embodiment, the terminal acquires a first remote sensing image and a second remote sensing image, crops them into a plurality of first small block images and a plurality of second small block images respectively, preprocesses the two sets of small block images, and fuses them based on the PCA algorithm to obtain fused images. A third sample image is acquired and cropped into a plurality of third sample small block images, which are preprocessed; the sample labeled images are trained based on the object detection Cascade R-CNN model to obtain the preset ground object identification model. A first sample image and a second sample image are acquired and cropped into first and second sample small block images, which are respectively preprocessed and fused based on the PCA algorithm to obtain sample fused images; the fused labeled images are trained based on the preset ground object identification model to obtain the preset newly added ground object identification model. The fused images are then processed based on the preset newly added ground object identification model to obtain the identification result of the newly added ground objects: when the first and second remote sensing images contain different ground objects, the identification result is the position information of the newly added ground objects of the second remote sensing image compared with the first. Therefore, newly added ground objects in remote sensing images can be identified automatically, labor cost and time are saved, and the accuracy and efficiency of identifying newly added ground objects can be improved.
Referring to fig. 5, another flow chart of the ground object identification method is provided in the embodiment of the present application. This embodiment is illustrated by applying the method to a terminal. The method may include the following steps:
s501, a first remote sensing image and a second remote sensing image are obtained.
Please refer to step S301 above; details are not repeated here.
S502, the first remote sensing image is cut to obtain a plurality of first small images, and the second remote sensing image is cut to obtain a plurality of second small images.
Please refer to step S302 above; details are not repeated here.
S503, preprocessing the plurality of first small block images and the plurality of second small block images, respectively.
Please refer to step S303 above; details are not repeated here.
And S504, fusing the plurality of first small images and the plurality of second small images based on the PCA algorithm to obtain a fused image.
Please refer to step S304 above; details are not repeated here.
And S505, performing fusion processing on the first sample image and the second sample image based on a PCA algorithm to obtain a plurality of sample fusion images.
The first sample image and the second sample image are sample remote sensing images of the same geographic area at different moments; a certain span exists between their corresponding time points, so a newly added ground object may appear in the geographic area within that period. The sample fused images are obtained by processing the plurality of first sample small block images and the plurality of second sample small block images with the PCA algorithm, and contain the information of the first and second sample images; the number of sample fused images is consistent with the (equal) numbers of first and second sample small block images.
Generally, after acquiring the first sample image and the second sample image, the terminal crops them to obtain a plurality of first sample small block images and a plurality of second sample small block images; that is, original sample remote sensing images at different times are cropped into corresponding sample small block images, each produced by capturing a part of the original sample image. Based on the PCA algorithm, image fusion is applied to the first and second sample small block images, ensuring that the resulting sample fused images have consistent brightness and tone and that newly added ground objects in them are remarkably highlighted.
S506, preprocessing the plurality of sample fused images, and training the plurality of fused labeled images based on the object detection Cascade R-CNN model to obtain a preset newly added ground object identification model.
The fused labeled images are obtained by manually annotating the sample fused images, the manual annotation being the position labeling of the newly added ground objects in the sample fused images.
Generally, a plurality of sample fused images are obtained by PCA fusion of the first and second sample small block images, and preprocessing operations such as stretching, brightness adjustment and tone adjustment are applied to them so that their brightness, tone and image angle are unified. The preprocessed sample fused images are annotated with newly added ground objects manually, thereby obtaining sample data that can be used for training the preset multi-moment newly added ground object identification model, namely the fused labeled images. Taking the fused labeled images as sample data, the Cascade R-CNN model is iteratively trained to generate a model capable of identifying newly added ground objects, namely the preset newly added ground object identification model.
The Cascade R-CNN model is a cascade of detection models, each trained on positive and negative samples defined by a different intersection-over-union (IOU) threshold; the output of one detection model serves as the input of the next, and the later a model sits in the cascade, the larger its threshold. A detection model is most effective on candidate boxes whose IOU is close to the threshold it was trained with, so each model's IOU threshold should be as close as possible to the IOU of its input candidate boxes. By cascading, each stage focuses on detecting candidate boxes whose IOU lies within a certain range, and since the IOU of the output is generally larger than that of the input, the detection effect becomes better and better (see the sketch following step S306).
And S507, processing the fused image based on the preset newly added ground object identification model to obtain the identification result of the newly added ground objects.
Please refer to step S311; the details are not repeated here.
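A minimal sketch of what S507 could amount to in code, assuming the torchvision-style detector from the training sketch above: the trained model is run tile by tile over the fused image, and the detected boxes are shifted back into the coordinates of the full scene; the score threshold is an illustrative choice:

```python
# Minimal sketch: tile-wise inference with box coordinates mapped back to
# the full image. Model interface and score threshold are assumptions.
import torch

@torch.no_grad()
def detect_new_ground_objects(model, fused_tiles, tile_origins, score_thresh=0.5):
    """fused_tiles: list of (3, H, W) float tensors; tile_origins: matching
    (x_offset, y_offset) of each tile's top-left corner in the full image."""
    model.eval()
    detections = []
    for tile, (ox, oy) in zip(fused_tiles, tile_origins):
        out = model([tile])[0]  # dict with 'boxes', 'scores', 'labels'
        keep = out["scores"] >= score_thresh
        boxes = out["boxes"][keep].clone()
        boxes[:, [0, 2]] += ox  # shift x1, x2 into full-image coordinates
        boxes[:, [1, 3]] += oy  # shift y1, y2 into full-image coordinates
        detections.append({"boxes": boxes, "labels": out["labels"][keep]})
    return detections
```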
As can be seen from the above, in the ground object recognition method provided by the present application, the terminal acquires the first remote sensing image and the second remote sensing image, crops the first remote sensing image into a plurality of first small block images and the second remote sensing image into a plurality of second small block images, preprocesses the two sets of small block images respectively, and fuses them based on the PCA algorithm to obtain the fusion image. It likewise fuses the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fusion images, preprocesses them, and trains the target detection Cascade R-CNN model on the multiple fusion labeling images to obtain the preset newly added ground object recognition model. The fusion image is then processed by this model to obtain the recognition result of the newly added ground objects. When the first remote sensing image and the second remote sensing image contain different ground objects, the recognition result is the position information of the ground objects newly added in the second remote sensing image relative to the first remote sensing image. Newly added ground objects in remote sensing images can thus be recognized automatically, which reduces labor cost and improves both the accuracy and the efficiency of recognizing newly added ground objects.
Referring to the schematic diagram of the newly added ground object recognition effect shown in Fig. 6, the left and right images are remote sensing images of the same geographic area at different times (the left corresponds to the first remote sensing image and the right to the second), where the second remote sensing image was taken later than the first and contains the newly added ground objects. The left image shows the effect of the pre-trained ground object recognition model (the preset ground object recognition model), which can recognize all ground objects in the first remote sensing image, such as the ground object 601 marked by a box; other marked ground objects can also be seen in the left image. The right image shows the effect of the pre-trained newly added ground object recognition model (the preset newly added ground object recognition model), which recognizes the ground objects newly added in the second remote sensing image relative to the first, such as the newly added ground object 603 marked by a box, alongside the original ground object 602 also present in the left image; other newly added and original ground objects marked in the right image can likewise be seen.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to Fig. 7, a schematic structural diagram of a ground object recognition device according to an exemplary embodiment of the present application is shown; it is hereinafter referred to as the recognition device 7. The recognition device 7 may be implemented as all or part of the terminal by software, hardware, or a combination of the two. The recognition device 7 is applied to the terminal and comprises:
the first processing module 701 is used for performing fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fused image; the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments;
a second processing module 702, configured to process the fusion image based on a preset newly added feature identification model to obtain an identification result of the newly added feature.
Optionally, the first remote sensing image and the second remote sensing image of the device 7 contain different ground objects, and the recognition result of the newly added ground object is the position information of the ground objects newly added in the second remote sensing image relative to the first remote sensing image.
Optionally, the first remote sensing image and the second remote sensing image of the device 7 contain the same ground objects, and a reminder message is displayed through a display unit.
Optionally, the first processing module 701 further includes:
the first acquisition unit is used for acquiring the first remote sensing image and the second remote sensing image;
the first processing unit is used for cutting the first remote sensing image to obtain a plurality of first small images and cutting the second remote sensing image to obtain a plurality of second small images;
and the second processing unit is used for carrying out fusion processing on the plurality of first small block images and the plurality of second small block images based on the PCA algorithm to obtain the fusion image.
Optionally, the first processing module 701 further includes:
the preprocessing unit is used for respectively preprocessing the plurality of first small block images and the plurality of second small block images; wherein the first small block image and the second small block image correspond to each other one by one, and the preprocessing includes at least one of: stretching, brightness adjustment and color tone adjustment.
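A minimal sketch of the preprocessing options listed here, assuming a per-band percentile stretch and a moment-matching tone unification between corresponding small block images; the patent leaves the concrete operations open, so both are illustrative:

```python
# Minimal sketch of the listed preprocessing: contrast stretch plus
# brightness/tone unification. Percentile window and moment matching
# are assumed choices, not the patent's.
import numpy as np

def percentile_stretch(img, low=2.0, high=98.0):
    """Per-band linear contrast stretch between the given percentiles."""
    out = np.empty(img.shape, dtype=np.float64)
    for b in range(img.shape[2]):
        lo, hi = np.percentile(img[..., b], [low, high])
        out[..., b] = np.clip((img[..., b] - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

def match_tone(src, ref):
    """Shift src's per-band mean and spread onto ref's, so a pair of
    corresponding small block images ends up with unified brightness/tone."""
    out = np.empty(src.shape, dtype=np.float64)
    for b in range(src.shape[2]):
        s = src[..., b].astype(np.float64)
        r = ref[..., b].astype(np.float64)
        out[..., b] = (s - s.mean()) / (s.std() + 1e-12) * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```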
Optionally, the apparatus 7 further includes:
the third processing unit is used for carrying out fusion processing on the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fusion images; the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects;
the first training unit is used for training the multiple fusion labeling images based on a preset ground feature recognition model to obtain a preset newly added ground feature recognition model; the preset ground feature recognition model is obtained by training a sample image at a single moment, the fusion labeling image is obtained by performing manual labeling processing on the sample fusion image, and the manual labeling processing refers to performing position labeling on a newly added ground feature of the sample fusion image.
Optionally, the apparatus 7 further includes:
the second acquisition unit is used for acquiring the first sample image and the second sample image, cropping the first sample image to obtain a plurality of first sample small block images, and cropping the second sample image to obtain a plurality of second sample small block images;
and the fourth processing unit is used for respectively preprocessing the plurality of first sample small block images and the plurality of second sample small block images, and fusing the plurality of preprocessed first sample small block images and the plurality of preprocessed second sample small block images based on the PCA algorithm to obtain the plurality of sample fused images.
Optionally, the apparatus 7 further includes:
the third acquisition unit is used for acquiring a third sample image and cutting the third sample image to obtain a plurality of third sample small images;
the fifth processing unit is used for preprocessing the third sample small images and training the sample marked images based on a target detection Cascade R-CNN model to obtain a preset ground object recognition model; the sample labeling image is obtained by carrying out manual labeling processing on a preprocessed third sample small block image, and the sample labeling image comprises position identification of ground objects.
Optionally, the apparatus 7 further includes:
the sixth processing unit is used for carrying out fusion processing on the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fusion images; the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects;
the seventh processing unit is used for preprocessing the multiple sample fusion images and training the multiple fusion labeling images based on a target detection Cascade R-CNN model to obtain the preset newly added ground object recognition model; the fusion labeling image is obtained by performing manual labeling processing on the sample fusion image, wherein the manual labeling processing refers to performing position labeling on newly added ground objects of the sample fusion image.
It should be noted that, when the ground object recognition device provided in the foregoing embodiments executes the ground object recognition method, the division into the above functional modules is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the ground object recognition device provided in the foregoing embodiments belongs to the same concept as the ground object recognition method embodiments; details of its implementation can be found in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the above method steps, and a specific execution process may refer to specific descriptions of embodiments shown in fig. 2 to 6, which are not described herein again.
The application also provides a terminal, which comprises a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
Referring to Fig. 8, a schematic structural diagram of a terminal according to an embodiment of the present application is shown; the terminal may be used to implement the ground object recognition method in the above embodiments. Specifically:
the memory 803 may be used to store software programs and modules, and the processor 800 executes various functional applications and data processing by operating the software programs and modules stored in the memory 803. The memory 803 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal device, and the like. Further, the memory 803 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 803 may also include a memory controller to provide the processor 800 and the input unit 805 access to the memory 803.
The input unit 805 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 805 may include a touch-sensitive surface 806 (e.g., a touch screen, a touchpad, or a touch frame). The touch-sensitive surface 806, also referred to as a touch display screen or touch pad, can collect touch operations performed by the user on or near it (such as operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 806 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 800, and it can also receive and execute commands sent by the processor 800. In addition, the touch-sensitive surface 806 may be implemented as a resistive, capacitive, infrared or surface acoustic wave type.
The display unit 813 may be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal device, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 813 may include a display panel 814, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like. Further, the touch-sensitive surface 806 may overlay the display panel 814; when the touch-sensitive surface 806 detects a touch operation on or near it, it passes the operation to the processor 800 to determine the type of the touch event, and the processor 800 then provides a corresponding visual output on the display panel 814 according to the type of the touch event. Although in Fig. 8 the touch-sensitive surface 806 and the display panel 814 are shown as two separate components implementing input and output functions, in some embodiments the touch-sensitive surface 806 may be integrated with the display panel 814 to implement both.
The processor 800 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 803 and calling data stored in the memory 803, thereby monitoring the terminal device as a whole. Optionally, processor 800 may include one or more processing cores; processor 800 may, among other things, integrate an application processor that handles operating system, user interface, application programs, etc., and a modem processor that handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 800.
Specifically, in this embodiment, the display unit of the terminal device is a touch screen display, and the terminal device further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs contain instructions for implementing the above ground object recognition method.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other ways of division in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
All functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method for recognizing a feature, the method comprising:
performing fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fused image; the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments;
and processing the fusion image based on a preset newly added ground object identification model to obtain an identification result of the newly added ground object.
2. The method according to claim 1, wherein the first remote sensing image and the second remote sensing image contain different ground features, and the identification result of the newly added ground feature is position information of the newly added ground feature of the second remote sensing image compared with the first remote sensing image.
3. The method according to claim 1, wherein the first remote sensing image and the second remote sensing image contain the same ground features, and a reminding message is displayed through a display unit.
4. The method of claim 1, wherein before the fusing the first remote sensing image and the second remote sensing image based on the Principal Component Analysis (PCA) algorithm to obtain the fused image, the method further comprises:
acquiring the first remote sensing image and the second remote sensing image;
the method for obtaining the fused image by fusing the first remote sensing image and the second remote sensing image based on the Principal Component Analysis (PCA) algorithm comprises the following steps:
cutting the first remote sensing image to obtain a plurality of first small images, and cutting the second remote sensing image to obtain a plurality of second small images;
and fusing the plurality of first small block images and the plurality of second small block images based on the PCA algorithm to obtain a fused image.
5. The method according to claim 4, wherein after the cropping of the first remote-sensing image to obtain a plurality of first tile images and the cropping of the second remote-sensing image to obtain a plurality of second tile images, before the fusing of the plurality of first tile images and the plurality of second tile images based on the PCA algorithm to obtain the fused image, the method further comprises:
respectively preprocessing the plurality of first small block images and the plurality of second small block images; wherein the first small block image and the second small block image correspond to each other one by one, and the preprocessing includes at least one of: stretching, brightness adjustment and color tone adjustment.
6. The method according to claim 1, wherein before the processing the fusion image based on the predetermined additional feature recognition model to obtain the recognition result of the additional feature, the method further comprises:
performing fusion processing on the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fusion images; the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects;
training a plurality of fusion labeling images based on a preset ground feature recognition model to obtain a preset newly added ground feature recognition model; the preset ground feature recognition model is obtained by training a sample image at a single moment, the fusion labeling image is obtained by performing manual labeling processing on the sample fusion image, and the manual labeling processing refers to performing position labeling on a newly added ground feature of the sample fusion image.
7. The method of claim 6, wherein before the fusing the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fused images, the method further comprises:
acquiring the first sample image and the second sample image;
the obtaining of a plurality of sample fused images by fusing the first sample image and the second sample image based on the PCA algorithm includes:
cutting the first sample image to obtain a plurality of first sample small block images, and cutting the second sample image to obtain a plurality of second sample small block images;
and respectively preprocessing the plurality of first sample small block images and the plurality of second sample small block images, and fusing the plurality of preprocessed first sample small block images and the plurality of preprocessed second sample small block images based on the PCA algorithm to obtain a plurality of sample fused images.
8. The method according to claim 6, wherein before the training of the fused annotation images based on the predetermined feature recognition model to obtain the predetermined new feature recognition model, the method further comprises:
acquiring a third sample image, and cutting the third sample image to obtain a plurality of third sample small block images;
preprocessing the third sample small images, and training the sample marked images based on a target detection Cascade R-CNN model to obtain a preset ground object recognition model; the sample labeling image is obtained by carrying out manual labeling processing on a preprocessed third sample small block image, and the sample labeling image comprises position identification of ground objects.
9. The method according to claim 1, wherein before the processing the fusion image based on the predetermined additional feature recognition model to obtain the recognition result of the additional feature, the method further comprises:
performing fusion processing on the first sample image and the second sample image based on the PCA algorithm to obtain a plurality of sample fusion images; the first sample image and the second sample image are remote sensing images of the same geographical area at different moments, and the first sample image and the second sample image contain different ground objects;
preprocessing the multiple sample fusion images, and training the multiple fusion labeling images based on a target detection Cascade R-CNN model to obtain a preset newly added ground object recognition model; the fusion labeling image is obtained by performing manual labeling processing on the sample fusion image, wherein the manual labeling processing refers to performing position labeling on newly added ground objects of the sample fusion image.
10. A ground object recognition apparatus, characterized in that the apparatus comprises:
the first processing module is used for carrying out fusion processing on the first remote sensing image and the second remote sensing image based on a Principal Component Analysis (PCA) algorithm to obtain a fused image; the first remote sensing image and the second remote sensing image are remote sensing images of the same geographical area at different moments;
and the second processing module is used for processing the fusion image based on a preset newly added ground object identification model to obtain an identification result of the newly added ground object.
11. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 9.
12. A terminal, comprising: a processor, a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 9.
CN202011023367.7A 2020-09-25 2020-09-25 Ground object identification method, device, storage medium and terminal Pending CN112287756A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011023367.7A CN112287756A (en) 2020-09-25 2020-09-25 Ground object identification method, device, storage medium and terminal


Publications (1)

Publication Number Publication Date
CN112287756A true CN112287756A (en) 2021-01-29

Family

ID=74421319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011023367.7A Pending CN112287756A (en) 2020-09-25 2020-09-25 Ground object identification method, device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112287756A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342449A (en) * 2023-03-29 2023-06-27 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium
CN116342449B (en) * 2023-03-29 2024-01-16 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination