CN115170406A - High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image

High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image

Info

Publication number
CN115170406A
CN115170406A (application CN202210651992.9A)
Authority
CN
China
Prior art keywords
image
sar
fusion
images
optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210651992.9A
Other languages
Chinese (zh)
Inventor
张洋
刘辉
刘嵘
贾然
李丹丹
周超
沈浩
刘传彬
李程启
曹付勇
贾明亮
蔡英明
张国飞
李鹏飞
陈子龙
吴洪勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202210651992.9A priority Critical patent/CN115170406A/en
Publication of CN115170406A publication Critical patent/CN115170406A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/70: Denoising; smoothing
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 5/80: Geometric correction
    • G06T 7/33: Image registration using feature-based methods
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/20064: Wavelet transform [DWT]
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a high-precision fusion method for an optical image and an SAR intensity image. The method establishes an image coordinate system, determines the position of each pixel in the two data sources, and computes the offset between the SAR image and the optical base-map image. Sub-pixel registration accuracy is achieved with a spectral diversity method, reducing fusion blur and bringing the SAR-to-optical registration error below 0.1 pixel. HSI transformation is then applied to the two RGB images to obtain their respective intensity components, wavelet transformation is applied to those intensity components, a fusion rule is selected to produce a new intensity component, and the inverse transformations yield the fused RGB image. The method overcomes the difficulty that cloud cover makes usable imagery hard to obtain when optical images are used alone, compensates for the SAR image's lack of ground-object spectral information, improves the accuracy of monitoring the safety environment of power transmission and transformation equipment, and provides first-hand image data for the timely discovery of disasters affecting transmission lines and their surroundings.

Description

High-precision fusion method of optical image and SAR (synthetic aperture radar) intensity image
Technical Field
The invention relates to the technical field of high-precision fusion of heterogeneous satellite remote-sensing images, and in particular to a high-precision fusion method for an optical image and an SAR (synthetic aperture radar) intensity image.
Background
High-spatial-resolution optical satellite remote-sensing imagery can monitor the power transmission and transformation operating environment, yielding vegetation coverage, topographic features, newly built houses, large-scale third-party construction, and the like within a certain range of the facilities. The spectral characteristics of optical images allow many kinds of surface change in the operating environment to be monitored, but high-spatial-resolution optical images cannot penetrate cloud layers, their revisit period is long, and they are difficult to acquire: months can pass without a single high-quality scene. Medium-resolution optical images, on the other hand, have spatial resolution too low for safety monitoring of power transmission and transformation equipment. Synthetic aperture radar (SAR) works day and night in all weather, unaffected by cloud and rain, and high-spatial-resolution SAR images can capture surface dynamics in time; however, SAR images contain no ground-object spectral information, and ground objects are identified better when SAR is combined with optical imagery. In view of this situation, a technology has been developed that monitors power transmission and transformation equipment by fusing high-spatial-resolution SAR images with medium-spatial-resolution optical images.
Disclosure of Invention
The invention aims to provide a method that fuses an optical image and an SAR intensity image with high precision, retaining both the SAR image's ability to monitor surface changes in a timely manner and the optical image's strength in identifying surface objects.
To achieve this aim, the invention adopts the following technical scheme.
A high-precision fusion method of an optical image and an SAR intensity image comprises the following steps:
acquiring a high-spatial-resolution X-band SAR image S1 of the study area and a medium-spatial-resolution optical image L1 of the same extent, captured over the same area on the same day;
performing data preprocessing on the images S1 and L1 to respectively obtain preprocessed images S2 and L2;
performing sub-pixel level registration of images S2 and L2, using L2 as the reference base map and registering S2 to L2 with a spectral diversity method to reduce fusion blurring;
and performing fusion processing with the registered high-resolution SAR image S2 and the medium-resolution optical image: the two data sources are normalized to grey-scale, HSI (hue-saturation-intensity) transformation is applied to the two RGB images to obtain their respective intensity components, wavelet transformation is applied to the intensity components, a fusion rule is selected to obtain a new intensity component, and finally the inverse transformations are applied to obtain the fused RGB image.
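The HSI decomposition named in the steps above can be sketched as follows. This is a minimal illustration of the standard HSI model using the arccos form of the hue formula; the patent does not give its exact conversion formulas, so this variant is an assumption:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard HSI decomposition of an RGB pixel/array in [0, 1]:
    H in radians, S in [0, 1], intensity I = (R + G + B) / 3.
    The fusion pipeline replaces I with a fused I' and inverts the transform."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                           # intensity component
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))                 # hue via the arccos form
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)         # reflect hue when B > G
    return h, s, i
```

After fusion produces the new intensity component I', the inverse HSI transform recombines (H, S, I') into an RGB image.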
Further, the preprocessing of S1 includes: orbit correction, image cropping, thermal noise removal, and terrain correction using DEM data.
Further, the preprocessing of L1 includes: radiometric calibration, atmospheric correction, orthorectification, image fusion, image enhancement, and region-of-interest cropping.
Further, the sub-pixel level registration of the images S2 and L2 includes the steps of:
performing pixel resampling: using bilinear interpolation with the pixel values of the 4 neighbouring points, the L2 image is resampled to match the pixel spacing of S2, yielding the resampled image L3;
registering S2 and L3: a multi-constraint GAN network learns the mapping relation between visible-light and SAR remote-sensing images, the trained model is used to expand the number and diversity of training samples, a neural network extracts features for image-block matching prediction, an offset parameter file is generated after coarse registration and offset estimation is performed, and finally the trained model removes the offset to achieve sub-pixel registration; the new SAR and optical images, S3 and L4 respectively, are then exported.
Furthermore, the image fusion adopts a texture-weighted enhancement method to make the texture features of the SAR image clearer, specifically comprising:
after noise reduction is completed, the image is reconstructed: the texture of the regions with strong backscattering is strengthened and their weight ratio increased, while the texture of the regions with weak backscattering is weakened, highlighting the texture of ground objects; the texture-enhanced SAR image is denoted S4. The formula is as follows:
S4 = k × S3_w + (1 - k) × S3_v
wherein k is a user-defined constant weight coefficient, S3_w is the image of the regions with strong backscattering, and S3_v is the image of the regions with weak backscattering.
The invention has the advantages that:
the method is based on the heterogeneous satellite remote sensing image, the characteristic information in the two images is extracted for image fusion, the difficulties that the cloud shielding is difficult to obtain when the optical image is used independently are overcome, the defect that the SAR image lacks the spectral information of the ground features is overcome, the safety environment monitoring accuracy of the power transmission and transformation equipment is improved, and first-hand image data are provided for timely discovering disasters such as external damage of a power transmission line and a peripheral third party, large-scale construction, landslide and the like.
The SAR and the optical image are registered by using a frequency spectrum grading method, so that fusion blurring caused by low registration precision is reduced.
The texture information of the SAR image is enhanced by using a texture weighting fusion algorithm, so that the fused image contains strong texture and strong spectrum, and the change condition of the surrounding environment of the power transmission line can be effectively identified.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely; obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
A high-precision fusion method of an optical image and an SAR intensity image comprises the following steps:
s1, acquiring an SAR image S1 of a high spatial resolution X wave band in a research area and a medium spatial resolution optical image L1 of the same width shot on the same day in the same area, and performing data preprocessing on the images S1 and L1:
preprocessing of the image S1 mainly includes orbit correction, image cropping, thermal noise removal, and terrain correction using DEM data, ready for registration with the medium resolution optical image, i.e., the L1 image. The thermal noise is the noise carried by the SAR satellite system, and is due to the energy generated during the operation of the satellite itself, and the image needs to be repeatedly subjected to multiple times to remove the influence of the thermal noise. In side view imaging, the relief of the terrain causes distortion to the SAR image and causes foreshortening, eclipsing, shadowing, etc., thus requiring terrain correction. The image S2 is obtained after the processing.
Preprocessing of the medium-resolution optical L1 image mainly comprises radiometric calibration, atmospheric correction, orthorectification, image fusion, image enhancement, and study-area cropping. Radiometric calibration converts the grey-level value (DN value) of the image into a radiance value or apparent reflectance, eliminating errors of the sensor itself and determining the accurate radiance at the sensor's entrance. The fusion here concerns only resolution and spectral information; the region of interest is cropped after fusion, with the crop boundary and size based on S1, finally yielding the medium-resolution optical satellite image L2.
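The calibration step above is a linear DN-to-radiance mapping, optionally followed by the standard top-of-atmosphere conversion when apparent reflectance is wanted. A minimal sketch; the coefficient names (`gain`, `offset`, `esun`) stand for illustrative sensor metadata, not values from the patent:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Linear radiometric calibration: L = gain * DN + offset."""
    return gain * np.asarray(dn, dtype=float) + offset

def toa_reflectance(radiance, esun, d_au, sun_zenith_deg):
    """Apparent (top-of-atmosphere) reflectance:
    rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    with d the Earth-Sun distance in AU and ESUN the band solar irradiance."""
    return (np.pi * radiance * d_au ** 2 /
            (esun * np.cos(np.radians(sun_zenith_deg))))
```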
S2, performing sub-pixel level registration of S2 and L2, using L2 as the reference base map and registering S2 to L2 with the spectral diversity method; the registration accuracy reaches 0.1 pixel, preparing for fusion and reducing fusion blur.
To reach sub-pixel registration accuracy, ordinary geographic registration is far from sufficient and special processing is required.
First, pixel resampling is performed. Because the resolution of the L2 image is lower, it must be resampled to match the pixel spacing of S2. Bilinear interpolation using the pixel values of the 4 neighbouring points is applied: each neighbour is given a weight according to its distance from the interpolation point, and linear interpolation is performed. The principle is to compute the pixel value of the target point from the pixel values of the upper-left, upper-right, lower-left and lower-right neighbouring points. Image L3 is obtained after resampling. The calculation formula is as follows:
a = (1-t)(1-u), b = (1-t)×u, c = t×u, d = t×(1-u)
where a, b, c and d are the four coordinate coefficients, and t and u are the fractional positions of the target point between the neighbouring points along the two coordinate axes.
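A minimal sketch of the resampling kernel described above (function and variable names are illustrative, not from the patent):

```python
def bilinear_sample(img, x, y):
    """Interpolate a row-major 2-D image at fractional coordinates (x, y)
    from its four neighbouring pixels, using the weights a, b, c, d."""
    x0, y0 = int(x), int(y)
    t, u = x - x0, y - y0            # fractional offsets along x and y
    a = (1 - t) * (1 - u)            # upper-left weight
    d = t * (1 - u)                  # upper-right weight
    b = (1 - t) * u                  # lower-left weight
    c = t * u                        # lower-right weight
    return (a * img[y0][x0] + d * img[y0][x0 + 1] +
            b * img[y0 + 1][x0] + c * img[y0 + 1][x0 + 1])
```

Resampling L2 to L3 amounts to evaluating this kernel at every S2 pixel centre mapped into L2 coordinates.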
Then the mapping relation between visible-light and remote-sensing SAR images is learned with a multi-constraint GAN network; the trained model is used to expand the number and diversity of training samples, a neural network extracts features for image-block matching prediction, an offset parameter file is generated after coarse registration and offset estimation is performed, and finally the trained model removes the offset to obtain sub-pixel registration. After registration is complete, the new SAR and optical images are exported and defined as S3 and L4 respectively.
S3, carrying out fusion processing by using the registered high-resolution SAR image S2 and the medium-resolution optical image:
and S31, texture weighting and enhancing, namely highlighting the texture of the SAR image, and the principle is that secondary noise reduction is carried out, after noise reduction is finished, image reconstruction is carried out, the texture of an area with strong backscattering is enhanced, the weight ratio of the area is enhanced, the texture of an area with weak backscattering is weakened, the texture of a ground object is highlighted, and the SAR mark for enhancing texture information is S4. The formula is as follows:
S4 = k × S3_w + (1 - k) × S3_v, where k is a user-defined constant weight coefficient, S3_w is the image of the regions with strong backscattering, and S3_v is the image of the regions with weak backscattering.
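The weighting step can be sketched as follows. Since the patent's formula appears only as an embedded image, both the split-by-mean thresholding into S3_w / S3_v and the weighted recombination S4 = k·S3_w + (1 - k)·S3_v used here are assumptions:

```python
import numpy as np

def texture_weighted_enhance(s3, k=0.7, threshold=None):
    """Recombine strong- and weak-backscatter regions with a user-defined
    weight k (k > 0.5 boosts strong-backscatter texture). The mean-intensity
    split into S3_w / S3_v is an assumption, not the patent's exact rule."""
    s3 = np.asarray(s3, dtype=float)
    if threshold is None:
        threshold = float(s3.mean())              # split by mean intensity
    s3_w = np.where(s3 >= threshold, s3, 0.0)     # strong-backscatter regions
    s3_v = np.where(s3 < threshold, s3, 0.0)      # weak-backscatter regions
    return k * s3_w + (1.0 - k) * s3_v            # texture-enhanced S4
```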
s32, fusing the texture information of the ground object in the S4 with the spectral information of the L4, firstly, performing HSI (hue, saturation and intensity) conversion on the optical image, converting the optical image from RGB (red, green and blue) to an HSI (hue, saturation and intensity) space, and separating an intensity component (texture information) I and spectral components H and S of the image; then, histogram matching is carried out on the radar image by utilizing the separated intensity component I, so that the SAR is consistent with the histogram distribution trend of the optical image, and the spectral information is effectively kept; applying a wavelet transform algorithm to the SAR image after the intensity component of the optical image and the histogram are matched, respectively obtaining a high-frequency component and a low-frequency component of the intensity component and the SAR image, and respectively fusing the low-frequency component and the high-frequency component of the intensity component and the SAR image; and applying wavelet inverse transformation to the low-frequency fusion result and the high-frequency fusion result to obtain a new intensity component I ', and performing HSI inverse transformation on the new intensity component I' and hue component H and saturation component S separated from the optical image by HSI transformation to obtain an optical and radar fusion result of an RGB space and obtain a fused RGB image.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that various changes, modifications and substitutions can be made without departing from the spirit and scope of the invention as defined by the appended claims. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (5)

1. A high-precision fusion method of an optical image and an SAR intensity image is characterized by comprising the following steps:
acquiring a high-spatial-resolution X-band SAR image S1 of the study area and a medium-spatial-resolution optical image L1 of the same extent, captured over the same area on the same day;
performing data preprocessing on the images S1 and L1 to respectively obtain preprocessed images S2 and L2;
performing sub-pixel level registration of images S2 and L2, using L2 as the reference base map and registering S2 to L2 with a spectral diversity method to reduce fusion blurring;
and performing fusion processing with the registered high-resolution SAR image S2 and the medium-resolution optical image: normalizing the two data sources to grey-scale, applying HSI (hue-saturation-intensity) transformation to the two RGB images to obtain their respective intensity components, applying wavelet transformation to the intensity components, selecting a fusion rule to obtain a new intensity component, and finally applying the inverse transformations to the new intensity component to obtain the fused RGB image.
2. The method for high-precision fusion of an optical image and an SAR intensity image according to claim 1, wherein the preprocessing of S1 comprises: orbit correction, image cropping, thermal noise removal, and terrain correction using DEM data.
3. The method for high-precision fusion of an optical image and an SAR intensity image according to claim 1, wherein the preprocessing of L1 comprises: radiometric calibration, atmospheric correction, orthorectification, image fusion, image enhancement, and region of interest cropping.
4. The method for high-precision fusion of an optical image and an SAR intensity image according to claim 1, wherein the sub-pixel level registration of the images S2 and L2 comprises the steps of:
performing pixel resampling: using bilinear interpolation with the pixel values of the 4 neighbouring points, the L2 image is resampled to match the pixel spacing of S2, yielding the resampled image L3;
registering S2 and L3: a multi-constraint GAN network learns the mapping relation between visible-light and SAR remote-sensing images, the trained model is used to expand the number and diversity of training samples, a neural network extracts features for image-block matching prediction, an offset parameter file is generated after coarse registration and offset estimation is performed, and finally the trained model removes the offset to achieve sub-pixel registration; the new SAR and optical images, S3 and L4 respectively, are then exported.
5. The method for high-precision fusion of an optical image and an SAR intensity image according to claim 1, wherein the image fusion adopts a texture-weighted enhancement method to make the texture features of the SAR image clearer, specifically comprising the steps of:
after noise reduction is completed, image reconstruction is performed: the texture of the regions with strong backscattering is strengthened and their weight ratio increased, while the texture of the regions with weak backscattering is weakened, thereby highlighting the texture of ground objects; the SAR image with enhanced texture information is denoted S4, and the formula is as follows:
S4 = k × S3_w + (1 - k) × S3_v
wherein k is a user-defined weight coefficient, S3_w is the image of the regions with strong backscattering, and S3_v is the image of the regions with weak backscattering.
CN202210651992.9A 2022-06-10 2022-06-10 High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image Pending CN115170406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210651992.9A CN115170406A (en) 2022-06-10 2022-06-10 High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210651992.9A CN115170406A (en) 2022-06-10 2022-06-10 High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image

Publications (1)

Publication Number Publication Date
CN115170406A true CN115170406A (en) 2022-10-11

Family

ID=83486069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210651992.9A Pending CN115170406A (en) 2022-06-10 2022-06-10 High-precision fusion method for optical image and SAR (synthetic aperture radar) intensity image

Country Status (1)

Country Link
CN (1) CN115170406A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4386430A1 (en) * 2022-12-16 2024-06-19 Iceye Oy Geolocation error detection method and system for synthetic aperture radar images
WO2024126698A1 (en) * 2022-12-16 2024-06-20 Iceye Oy Geolocation error detection method and system for synthetic aperture radar images
CN116452483A (en) * 2023-05-10 2023-07-18 北京道达天际科技股份有限公司 Image fusion method based on wavelet transformation and HSI color space
CN118366059A (en) * 2024-06-20 2024-07-19 山东锋士信息技术有限公司 Crop water demand calculating method based on optical and SAR data fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination