CN116486278B - Hour-level ozone estimation method based on space-time information mosaic - Google Patents

Hour-level ozone estimation method based on space-time information mosaic

Info

Publication number
CN116486278B
CN116486278B · CN202310447628.5A
Authority
CN
China
Prior art keywords
environment
ozone
image
data
estimation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310447628.5A
Other languages
Chinese (zh)
Other versions
CN116486278A (en)
Inventor
蔡坤
肖一卓
李莘莘
展桂荣
乔保军
张硕
李郭宇
史建宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University
Priority to CN202310447628.5A
Publication of CN116486278A
Application granted
Publication of CN116486278B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004Gaseous mixtures, e.g. polluted air
    • G01N33/0009General constructional details of gas analysers, e.g. portable test equipment
    • G01N33/0027General constructional details of gas analysers, e.g. portable test equipment concerning the detector
    • G01N33/0036General constructional details of gas analysers, e.g. portable test equipment concerning the detector specially adapted to detect a particular component
    • G01N33/0039O3
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004Gaseous mixtures, e.g. polluted air
    • G01N33/0009General constructional details of gas analysers, e.g. portable test equipment
    • G01N33/0062General constructional details of gas analysers, e.g. portable test equipment concerning the measuring method or the display, e.g. intermittent measurement or digital display
    • G01N33/0067General constructional details of gas analysers, e.g. portable test equipment concerning the measuring method or the display, e.g. intermittent measurement or digital display by measuring the rate of variation of the concentration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Combustion & Propulsion (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Medicinal Chemistry (AREA)
  • Food Science & Technology (AREA)
  • Biophysics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of atmospheric environment, in particular to an hour-level ozone estimation method based on space-time information mosaic, comprising the following steps: acquiring environment detection data to form an ozone concentration monitoring data set, and converting the environment detection data into images to obtain environment monitoring data images; resampling the resolution of each environment monitoring data image using cubic interpolation to obtain a preferred environment image; partitioning the preferred environment image into environment data sub-images, converting each row of an environment data sub-image into an environment row vector and each column into an environment column vector, and thereby obtaining a spatial information matrix; forming an ozone estimation data set from the environment monitoring data images corresponding to all environment detection data in the ozone concentration monitoring data set together with the corresponding spatial information matrices; and inputting the ozone estimation data set into a pre-constructed ozone estimation model to obtain ozone estimation result data. The application can obtain more accurate ozone estimation results.

Description

Hour-level ozone estimation method based on space-time information mosaic
Technical Field
The application relates to the technical field of atmospheric environment, in particular to an hour-level ozone estimation method based on space-time information mosaic.
Background
Satellite remote sensing has the unique advantage of large-scale space-time observation and plays an important role in atmospheric environment monitoring, and deep learning has been widely used for monitoring atmospheric particulates. At present, monitoring of near-surface ozone depends mainly on ground stations, which can monitor pollutant gas concentrations accurately and in real time. However, because the number of ground stations is limited, a fully covering ozone concentration data set cannot be built from ground stations alone, so ozone estimation results are inaccurate. In addition, existing ozone estimation methods usually feed time information, spatial information and other meteorological monitoring information into the estimation model as raw numbers, which can introduce errors and further degrade the accuracy of the ozone estimate.
Disclosure of Invention
In order to solve the technical problem of inaccurate ozone estimation results, the application provides an hour-level ozone estimation method based on space-time information mosaic. The adopted technical scheme is as follows:
acquiring environment detection data to form an ozone concentration monitoring data set, and performing image conversion on the environment detection data to obtain an environment monitoring data image;
resampling the resolution of the environment monitoring data image using cubic interpolation to obtain a preferred environment image;
partitioning the preferred environment image into environment data sub-images, converting each row of an environment data sub-image into an environment row vector and each column into an environment column vector, and obtaining a spatial information matrix from the environment row vector and the environment column vector;
forming an ozone estimation data set from the environment monitoring data images corresponding to all environment detection data in the ozone concentration monitoring data set together with the corresponding spatial information matrices; and inputting the ozone estimation data set into a pre-constructed ozone estimation model to obtain ozone estimation result data.
Preferably, acquiring the environment detection data to form the ozone concentration monitoring data set specifically comprises:
forming the ozone monitoring data set from the tropospheric ozone column concentration, air temperature, air pressure, humidity, meridional wind, zonal wind, digital elevation model, normalized difference vegetation index, national population density, primary road data and secondary road data.
Preferably, resampling the resolution of the environment monitoring data image using cubic interpolation is specifically:
P(α) = a1(α − α0)^3 + a2(α − α0)^2 + a3(α − α0) + a4
where P(α) is the value of the cubic interpolation function, α is the independent variable, α0 is the expansion point of the independent variable α, and a1, a2, a3 and a4 are undetermined coefficients.
Preferably, the undetermined coefficients are obtained as follows:
a3 = F′(a) = P′(α0)
a4 = F(a) = P(α0)
where a1, a2, a3 and a4 are the undetermined coefficients; b denotes the independent variable α taking the value b, with F(b) = P(b) and F′(b) = P′(b); and a denotes the independent variable α taking the value a, with F(a) = P(a) and F′(a) = P′(a).
Preferably, converting each row of the environment data sub-image into an environment row vector and each column into an environment column vector specifically comprises:
where x_n represents the n-th environment row vector or environment column vector and n represents the number of environment row vectors or environment column vectors contained in the environment data sub-image; the remaining symbol in the formula represents the number of columns or rows contained in the preferred environment image.
Preferably, obtaining the spatial information matrix from the environment row vector and the environment column vector is specifically:
obtaining the spatial information matrix by multiplying the transposed environment column vector by the environment row vector.
Preferably, the ozone estimation model is a ResNet18 network model.
The embodiment of the application has at least the following beneficial effects:
according to the application, the environment monitoring data adopted in ozone estimation is obtained to form an environment monitoring data set, the environment monitoring data in the environment monitoring data set are respectively subjected to data type conversion, the environment monitoring data are converted into image data and are input into an estimation model, so that errors of estimation results are avoided, meanwhile, the image data are analyzed, space-time information of the environment monitoring data is converted into a transformation matrix, and the space information matrix is used as a wave band and is put into the input data set of the estimation model, so that ozone concentration is estimated, and a more accurate ozone estimation result can be obtained.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of the hour-level ozone estimation method based on space-time information mosaic according to the present application;
FIG. 2 is a graph of the verification results for the ozone estimation result data of the present application;
FIG. 3 is a schematic structural diagram of the ozone estimation model of the present application.
Detailed Description
In order to further explain the technical means and effects adopted by the present application to achieve its intended purpose, the specific implementation, structure, features and effects of the hour-level ozone estimation method based on space-time information mosaic are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
The following describes the specific scheme of the hour-level ozone estimation method based on space-time information mosaic in detail.
Embodiment:
Referring to FIG. 1, a flowchart of the hour-level ozone estimation method based on space-time information mosaic according to an embodiment of the application is shown. The method comprises the following steps:
step one, acquiring environment detection data to form an ozone concentration monitoring data set, and performing image conversion on the environment detection data to obtain an environment monitoring data image.
First, in existing ozone estimation methods, the time information (month, day and hour), the spatial information (longitude and latitude) and other meteorological monitoring information are usually input into the estimation model directly as numbers, which may introduce errors and make the ozone estimation result inaccurate. For example, if the latitudes 20°N and 40°N are fed directly into the ozone estimation model as the numbers 20 and 40, the ozone estimate for 40°N is not simply twice the estimate for 20°N. A spatio-temporal variable is essentially a categorical variable: it influences spatio-temporal correlation and heterogeneity, but that influence cannot be measured by the raw numeric label. The embodiment of the application therefore improves the way spatio-temporal information is input into the ozone estimation model.
The embodiment of the application estimates ozone over the whole country hour by hour using tropospheric ozone column concentration data, meteorological data, model data and auxiliary data. Specifically, the tropospheric ozone column concentration (TROPOMI), air temperature, air pressure, humidity, meridional wind, zonal wind, digital elevation model (DEM), normalized difference vegetation index (NDVI), national population density (POP), primary road data (pri) and secondary road data (sec) are combined into the ozone monitoring data set.
The ground-truth ozone concentration data come from the national environmental monitoring stations in CSV format. The tropospheric ozone column concentration data come from the NASA website in NC4 format. Near-surface ozone concentration data come from the SILAM official website. The meteorological data are GEOS-CF model data from the NASA website in NC format. Taking the tropospheric ozone column concentration data as an example, after the NC4 data are obtained they undergo format conversion, missing-value filling, multi-orbit merging and cropping, so that they can finally be fused with the other data. The spatial resolution used in this embodiment is 0.01°, and the data volume is in the tens of millions of records.
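As a hedged illustration of the "image conversion" step that follows, the sketch below rasterises hypothetical point records (lon, lat, value) onto a regular 0.01° grid so that one monitoring variable becomes a single-band image. The grid bounds, the row ordering (row 0 at the northern edge) and the `rasterise` helper are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def rasterise(records, lon0, lat0, res, width, height):
    """Place (lon, lat, value) point records onto a regular grid.
    lon0/lat0 are the west/north edges; res is the cell size in degrees.
    Cells without a record stay NaN. All names here are illustrative."""
    img = np.full((height, width), np.nan, dtype=np.float32)
    for lon, lat, val in records:
        j = int(round((lon - lon0) / res))   # column index from longitude
        i = int(round((lat0 - lat) / res))   # row 0 = northernmost latitude
        if 0 <= i < height and 0 <= j < width:
            img[i, j] = val
    return img

# One hypothetical station record at 100.02°E, 39.98°N
img = rasterise([(100.02, 39.98, 5.0)], lon0=100.0, lat0=40.0,
                res=0.01, width=10, height=10)
```

A real pipeline would additionally fill the NaN cells (the missing-value filling step mentioned above) before fusing the band with the others.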
The environment monitoring data set contains multiple environment monitoring data items; each is analysed and processed separately, i.e. every environment monitoring data item is converted into image data to obtain an environment monitoring data image.
Step two, resample the resolution of the environment monitoring data image using cubic interpolation to obtain the preferred environment image.
After the environment monitoring data image corresponding to each item in the environment monitoring data set has been obtained, its resolution must be resampled. Common resampling methods produce heavy mosaic (blocking) artifacts when increasing image resolution, so the resolution improvement is poor and the convolutional feature extraction of the image is affected to some extent. Therefore, in this embodiment, cubic interpolation is used to resample the resolution of the environment monitoring data image and obtain the preferred environment image.
The cubic interpolation formula is specifically:
P(α) = a1(α − α0)^3 + a2(α − α0)^2 + a3(α − α0) + a4
where P(α) is the value of the cubic interpolation function, i.e. the general form of a cubic polynomial; α is the independent variable; α0 is the expansion point of α; and a1, a2, a3 and a4 are undetermined coefficients.
The undetermined coefficients can be obtained from the function values and first derivatives at the two points a and b. The calculation formulas are specifically:
a3 = F′(a) = P′(α0)
a4 = F(a) = P(α0)
where a1, a2, a3 and a4 are the undetermined coefficients; b denotes the independent variable α taking the value b, with F(b) = P(b) and F′(b) = P′(b); and a denotes the independent variable α taking the value a, with F(a) = P(a) and F′(a) = P′(a).
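The two remaining coefficients a1 and a2 follow from the conditions at the second endpoint b. A minimal sketch of solving for all four coefficients from F(a), F(b), F′(a) and F′(b) (taking the expansion point α0 = a, as the formulas above do) might look like the following; the function names are illustrative.

```python
import numpy as np

def cubic_coeffs(a, b, fa, fb, dfa, dfb):
    """Coefficients of P(x) = a1*(x-a)**3 + a2*(x-a)**2 + a3*(x-a) + a4
    matching function values (fa, fb) and derivatives (dfa, dfb) at a, b."""
    a4 = fa                      # a4 = F(a)  = P(alpha0)
    a3 = dfa                     # a3 = F'(a) = P'(alpha0)
    h = b - a
    # Remaining conditions P(b) = F(b) and P'(b) = F'(b) give a 2x2 system
    A = np.array([[h**3, h**2],
                  [3 * h**2, 2 * h]])
    rhs = np.array([fb - a3 * h - a4, dfb - a3])
    a1, a2 = np.linalg.solve(A, rhs)
    return a1, a2, a3, a4

def eval_cubic(coeffs, a, x):
    a1, a2, a3, a4 = coeffs
    t = x - a
    return a1 * t**3 + a2 * t**2 + a3 * t + a4

# Interpolating f(x) = x^3 - 2x + 1 on [0, 2] recovers f exactly:
coeffs = cubic_coeffs(0.0, 2.0, fa=1.0, fb=5.0, dfa=-2.0, dfb=10.0)
```

Because a cubic is uniquely determined by two values and two derivatives, the sketch reproduces any cubic exactly; for image resampling this fit is applied piecewise along each axis.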
Step three, partition the preferred environment image into environment data sub-images, convert each row of a sub-image into an environment row vector and each column into an environment column vector, and obtain the spatial information matrix from the environment row and column vectors.
After resolution resampling, the input data can be fused into an eleven-band image that can be fed into a convolutional neural network. The fused image size is 4396 × 8715. Specifically, this embodiment acquires eleven different environment monitoring data items to form the environment monitoring data set, each corresponding to one environment monitoring data image. After resolution resampling, the eleven environment monitoring data images are placed into different channels and fused into one image, the preferred environment image, which is therefore an eleven-channel image. When the preferred environment image is input into the ozone estimation model, the corresponding convolution parameters must be modified accordingly, i.e. the channels parameter is set to 11. The implementer can adjust this according to the types of environment monitoring data collected in a specific implementation scenario.
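The band-fusion step above can be sketched with plain NumPy: eleven single-band arrays on a common grid become one channels-first image. The small grid size here is an illustrative stand-in for the 4396 × 8715 national grid.

```python
import numpy as np

H, W = 64, 64  # illustrative stand-in for the 4396 x 8715 national grid
# Eleven resampled single-band monitoring images (synthetic data here)
bands = [np.random.rand(H, W).astype(np.float32) for _ in range(11)]
fused = np.stack(bands, axis=0)  # shape (11, H, W): an 11-channel image
```

The channels-first layout matches what a convolutional network with `channels = 11` in its first layer expects.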
For the ozone estimation model, the preferred environment image corresponding to the environment monitoring data is large in pixel dimensions, so in this embodiment it is partitioned into sub-images of a fixed size before being input into the model. The fixed size is n × n, where n is a hyperparameter that controls the size of the input data; in this embodiment n = 3, and the implementer can set it according to the specific implementation scenario. Specifically, the preferred environment image is partitioned into equally sized environment data sub-images of size n × n.
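The partitioning into fixed-size sub-images might be sketched as below; the patent does not say how a ragged border is handled, so this sketch simply drops incomplete tiles (an assumption).

```python
import numpy as np

def split_into_tiles(img, n):
    """Split a channels-first (C, H, W) image into non-overlapping
    n x n tiles, discarding any incomplete border tiles."""
    C, H, W = img.shape
    return [img[:, i:i + n, j:j + n]
            for i in range(0, H - n + 1, n)
            for j in range(0, W - n + 1, n)]

# An 11-channel 9 x 12 image yields 3 x 4 = 12 tiles of size 3 x 3
tiles = split_into_tiles(np.zeros((11, 9, 12)), n=3)
```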
In this embodiment, each row of data in the environment data sub-image represents latitude information, each column represents longitude information, and any row or column of data is converted into a vector of length n.
Specifically, each row of the environment data sub-image is converted into an environment row vector and each column into an environment column vector. Taking the n-th environment row vector or environment column vector as an example, the calculation formula is as follows:
where x_n represents the n-th environment row vector or environment column vector and n represents the number of environment row vectors or environment column vectors contained in the environment data sub-image; the remaining symbol in the formula represents the number of columns or rows contained in the preferred environment image.
The spatial information matrix is obtained from the environment row and column vectors; specifically, the transposed environment column vector is multiplied by the environment row vector, giving an n × n spatial information matrix. The spatial information matrix corresponding to each environment data sub-image is placed into the data set as input data for the ozone estimation model.
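The multiplication above is just the outer product of the (transposed) environment column vector with the environment row vector, e.g. for n = 3 with illustrative values:

```python
import numpy as np

col = np.array([0.2, 0.5, 0.8])   # illustrative environment column vector
row = np.array([0.1, 0.4, 0.7])   # illustrative environment row vector
# Transposed column vector times row vector = outer product, an n x n matrix
spatial = np.outer(col, row)       # spatial[i, j] = col[i] * row[j]
```

Every entry of the resulting matrix couples one latitude value with one longitude value, which is why it can be stacked as an extra band alongside the monitoring-data channels.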
Step four, form the ozone estimation data set from the environment monitoring data images corresponding to all environment detection data in the ozone concentration monitoring data set together with the corresponding spatial information matrices, and input the ozone estimation data set into the pre-constructed ozone estimation model to obtain the ozone estimation result data.
In this embodiment, a ResNet18 residual network is used as the ozone estimation model; its structure is shown in FIG. 3. The ResNet18 backbone has 5 blocks in total; each block produces feature maps at a different level, and the main information in the low-level features is extracted by a multi-level information fusion module. In FIG. 3, Month and Day denote the month and day; Lon denotes longitude and Lat denotes latitude; V_ij is the value in row i, column j of the spatial information matrix; BasicBlock1 and BasicBlock2 are ResNet building blocks, each containing a residual branch and a shortcut branch; Conv is a convolution layer; Linear is a linear transformation, i.e. a fully connected layer; BN is a BatchNorm layer; and ReLU is the activation function.
Further, the environment monitoring data images corresponding to all environment detection data in the ozone concentration monitoring data set and the corresponding space information matrix form an ozone estimation data set; and inputting the ozone estimation data set into a pre-constructed ozone estimation model to obtain ozone estimation result data.
Finally, the ozone estimation result data are verified; the verification results are shown in FIG. 2. In FIG. 2, R² is the coefficient of determination between the monitored and simulated ozone concentrations, RMSE is the root mean square error, MAE is the mean absolute error, N is the number of validation samples, and y is the linear regression equation fitted to the result distribution; the fitting method used in this embodiment is least squares. These are all prior art, so their calculation is not described in detail here.
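The validation quantities named above (R², RMSE, MAE and the least-squares regression line) can be computed with NumPy as follows; the helper name is illustrative.

```python
import numpy as np

def validation_metrics(y_true, y_pred):
    """R^2, RMSE and MAE between monitored and estimated values, plus the
    least-squares regression line y = k*x + b of the scatter plot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    mae = float(np.mean(np.abs(resid)))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    k, b = np.polyfit(y_true, y_pred, 1)   # least-squares linear fit
    return r2, rmse, mae, k, b

r2, rmse, mae, k, b = validation_metrics([1.0, 2.0, 3.0, 4.0],
                                         [1.1, 1.9, 3.2, 3.8])
```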
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application and are intended to be included within the scope of the application.

Claims (5)

1. An hour-level ozone estimation method based on space-time information mosaic is characterized by comprising the following steps:
acquiring environment detection data to form an ozone concentration monitoring data set, and performing image conversion on the environment detection data to obtain an environment monitoring data image;
resampling the resolution of the environment monitoring data image using cubic interpolation to obtain a preferred environment image;
partitioning the preferred environment image into environment data sub-images, converting each row of an environment data sub-image into an environment row vector and each column into an environment column vector, and obtaining a spatial information matrix from the environment row vector and the environment column vector;
forming an ozone estimation data set from the environment monitoring data images corresponding to all environment detection data in the ozone concentration monitoring data set together with the corresponding spatial information matrices; inputting the ozone estimation data set into a pre-constructed ozone estimation model to obtain ozone estimation result data;
the converting each row of the environmental data sub-image into an environmental row vector and each column of the environmental data sub-image into an environmental column vector specifically includes:
wherein x is n Representing an nth ambient row vector or ambient column vector,represents the number of columns or rows contained in the preferred ambient image, n represents the number of ambient row vectors or ambient column vectors contained in the ambient data sub-image, +.>
wherein obtaining the spatial information matrix from the environment row vector and the environment column vector is specifically:
obtaining the spatial information matrix by multiplying the transposed environment column vector by the environment row vector.
2. The hour-level ozone estimation method based on space-time information mosaic according to claim 1, wherein acquiring the environment detection data to form the ozone concentration monitoring data set comprises:
forming the ozone monitoring data set from the tropospheric ozone column concentration, air temperature, air pressure, humidity, meridional wind, zonal wind, digital elevation model, normalized difference vegetation index, national population density, primary road data and secondary road data.
3. The hour-level ozone estimation method based on space-time information mosaic according to claim 1, wherein resampling the resolution of the environment monitoring data image using cubic interpolation is specifically:
P(α) = a1(α − α0)^3 + a2(α − α0)^2 + a3(α − α0) + a4
where P(α) is the value of the cubic interpolation function, α is the independent variable, α0 is the expansion point of the independent variable α, and a1, a2, a3 and a4 are undetermined coefficients.
4. The hour-level ozone estimation method based on space-time information mosaic according to claim 3, wherein the undetermined coefficients are obtained as follows:
a3 = F′(a) = P′(α0)
a4 = F(a) = P(α0)
where a1, a2, a3 and a4 are the undetermined coefficients; b denotes the independent variable α taking the value b, with F(b) = P(b) and F′(b) = P′(b); and a denotes the independent variable α taking the value a, with F(a) = P(a) and F′(a) = P′(a).
5. The hour-level ozone estimation method based on space-time information mosaic according to claim 1, wherein the ozone estimation model is a ResNet18 network model.
CN202310447628.5A 2023-04-24 2023-04-24 Hour-level ozone estimation method based on space-time information mosaic Active CN116486278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310447628.5A CN116486278B (en) 2023-04-24 2023-04-24 Hour-level ozone estimation method based on space-time information mosaic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310447628.5A CN116486278B (en) 2023-04-24 2023-04-24 Hour-level ozone estimation method based on space-time information mosaic

Publications (2)

Publication Number Publication Date
CN116486278A CN116486278A (en) 2023-07-25
CN116486278B true CN116486278B (en) 2023-11-21

Family

ID=87216252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310447628.5A Active CN116486278B (en) 2023-04-24 2023-04-24 Hour-level ozone estimation method based on space-time information mosaic

Country Status (1)

Country Link
CN (1) CN116486278B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108836A (en) * 2017-12-15 2018-06-01 清华大学 A kind of ozone concentration distribution forecasting method and system based on space-time deep learning
CN113189014A (en) * 2021-04-14 2021-07-30 西安交通大学 Ozone concentration estimation method fusing satellite remote sensing and ground monitoring data
CN114897250A (en) * 2022-05-22 2022-08-12 浙江农林大学 CNN-GRU ozone concentration prediction model building method, prediction method and model integrating space and statistical characteristics

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102507630B (en) * 2011-11-30 2013-05-08 大连理工大学 Method for forecasting oxidation reaction rate constant of chemical substance and ozone based on molecular structure and environmental temperature


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Near-surface ozone estimation and spatio-temporal characteristics analysis based on BP neural network; Li Ziwei et al.; Bulletin of Surveying and Mapping (《测绘通报》); No. 6; pp. 1-6 *
Spatial–Temporal Variations in NO2 and PM2.5 over the Chengdu–Chongqing Economic Zone in China during 2005–2015 Based on Satellite Remote Sensing;Kun Cai 等;《sensors》;第1-16页 *

Also Published As

Publication number Publication date
CN116486278A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN112836610B (en) Land use change and carbon reserve quantitative estimation method based on remote sensing data
CN109447260B (en) Local numerical weather forecast product correction method based on deep learning
CN113297527B (en) PM based on multisource city big data 2.5 Overall domain space-time calculation inference method
CN114019579B (en) High space-time resolution near-surface air temperature reconstruction method, system and equipment
CN111192282B (en) Lake and reservoir time sequence water level reconstruction method for lakeside zone virtual station
CN115062527B (en) Geostationary satellite sea temperature inversion method and system based on deep learning
Yoo et al. Spatial downscaling of MODIS land surface temperature: Recent research trends, challenges, and future directions
CN113108918B (en) Method for inverting air temperature by using thermal infrared remote sensing data of polar-orbit meteorological satellite
CN113935249B (en) Upper-layer ocean thermal structure inversion method based on compression and excitation network
Liu et al. Hyperspectral infrared sounder cloud detection using deep neural network model
CN116504330A (en) Pollutant concentration inversion method and device, electronic equipment and readable storage medium
CN114595876A (en) Regional wind field prediction model generation method and device and electronic equipment
CN112598590B (en) Optical remote sensing time series image reconstruction method and system based on deep learning
CN116486278B (en) Hour-level ozone estimation method based on space-time information mosaic
CN111177652B (en) Spatial downscaling method and system for remote sensing precipitation data
CN107576399A (en) Towards bright the temperature Forecasting Methodology and system of MODIS forest fire detections
CN116307070A (en) Prediction method for aging speed change of marsh vegetation canopy
CN112990609B (en) Air quality prediction method based on space-time bandwidth self-adaptive geographical weighted regression
CN115452167A (en) Satellite remote sensor cross calibration method and device based on invariant pixel
CN115222837A (en) True color cloud picture generation method and device, electronic equipment and storage medium
CN110222301B (en) Surface solar short wave radiation calculation method under haze condition
CN111539455B (en) Global ionosphere electron total content prediction method based on image primary difference
Yue et al. Spatiotemporal variations in surface albedo during the ablation season and linkages with the annual mass balance on Muz Taw Glacier, Altai Mountains
CN114169215A (en) Surface temperature inversion method coupling remote sensing and regional meteorological model
CN111695530A (en) River water replenishing effect intelligent monitoring and evaluation method based on high-resolution remote sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant