CN112732852B - Cross-platform space-time big data distributed processing method and system - Google Patents

Info

Publication number
CN112732852B
CN112732852B (application CN202011643656.7A)
Authority
CN
China
Prior art keywords
data
spatial
space
input
spatialtransform
Prior art date
Legal status
Active
Application number
CN202011643656.7A
Other languages
Chinese (zh)
Other versions
CN112732852A (en
Inventor
乐鹏
王翰诚
梁哲恒
姜良存
姜福泉
魏汝兰
吴宝佑
李皓
Current Assignee
South Digital Technology Co ltd
Wuhan University WHU
Original Assignee
South Digital Technology Co ltd
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by South Digital Technology Co ltd, Wuhan University WHU filed Critical South Digital Technology Co ltd
Priority to CN202011643656.7A priority Critical patent/CN112732852B/en
Publication of CN112732852A publication Critical patent/CN112732852A/en
Application granted granted Critical
Publication of CN112732852B publication Critical patent/CN112732852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G06F16/25: Integrating or interfacing systems involving database management systems
    • G06F16/254: Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Abstract

The invention relates to a cross-platform space-time big data distributed processing method and system. On the basis of reusing the kernel of a traditional geographic information system, it provides a cross-platform space-time big data management method in which the Apache Beam model is used to store spatial data efficiently, so that users no longer need to write separate data management programs for each distributed computing platform, which greatly improves development efficiency. It also provides an improved distributed spatial data parallel processing method which, building on the non-spatial parallel processing provided by Apache Beam, supports the parallelization of spatial analysis algorithms that must process multiple input point elements simultaneously, such as interpolation analysis and density analysis. The method spares the user from writing their own spatial data processing algorithms, makes such parallelization possible, and can efficiently process and analyze massive spatial data.

Description

Cross-platform space-time big data distributed processing method and system
Technical Field
The invention belongs to the field of geographic information systems, and relates to a cross-platform space-time big data distributed processing method and system.
Background
Due to the popularization of low-cost and ubiquitous positioning technology, the acquisition of massive spatial data has become easier and easier. With the development of crowdsourced geographic information systems such as OpenStreetMap, individual users contribute ever more spatial information. Obtaining value by analyzing massive spatial data, and thereby further guiding decision making, has become the key to commercial and scientific success.
In order to efficiently store and compute massive data, platform-independent distributed computing frameworks have emerged. Apache Beam is a big data programming model contributed by Google; it defines a programming paradigm and interfaces for data processing without involving the implementation of a specific execution engine, so a data processing program developed on Beam can be executed on any mainstream distributed computing platform. At present, Spark, Flink and Apex provide support for batch processing and stream processing, and support for Storm is under development. However, Apache Beam does not support spatial data and operations, so users need to write their own spatial data processing algorithms on Apache Beam, which is tedious. Moreover, Apache Beam supports parallel processing only of a single element in a set of input elements, and does not support spatial analysis algorithms that require simultaneous processing of multiple input point elements, such as interpolation analysis and density analysis.
There have also been some efforts in processing spatial data. GeoTools is an open source Java GIS toolkit that can be used to develop standards-compliant geographic information systems. GeoTools defines many interfaces for spatial concepts and data structures, and one implementation of those interfaces is the JTS Topology Suite (JTS). JTS is a set of open source Java APIs from Vivid Solutions, Inc. of Canada. It provides a set of kernel algorithms for spatial data manipulation, including operations to compute buffers, distances, line merging, overlays, polygonization, spatial predicates, spatial relationships and validity of geometries. All operations act on the Geometry object defined in the geom package. However, JTS does not support distributed computing, so users need to write algorithms for each distributed computing platform separately, which greatly reduces development efficiency.
With more and more big data platforms applied in actual production, a space-time big data distributed processing model that supports spatial data and operations, and that frees users from duplicating and migrating algorithms, still needs to be realized.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a cross-platform space-time big data distributed processing method.
The technical scheme of the invention is a cross-platform space-time big data distributed processing method, which comprises the following steps:
step 1, the method implements, on the basis of Apache Beam, the definition of SpatialPipeline, namely a spatial parallel processing pipeline, which encapsulates the whole spatial data processing task from beginning to end: reading input data, transforming data and writing output data. These pipelines provide a set of language-specific SDKs for building the pipeline (supported languages include but are not limited to Java, Python, etc.) and execute the pipeline using a Runner for a specific runtime environment (supported runtime environments include but are not limited to Spark, Flink, Cloud Dataflow, etc.). A pipeline contains a series of SpatialCollection, SpatialTransform and I/O operations, which together constitute a complete set of computation logic. The most critical step is building the PipelineOptions object, namely the pipeline configuration options, which can be read from the command line to configure the SpatialPipeline. Different aspects of the SpatialPipeline are configured using PipelineOptions, such as its input data and execution mode. The PipelineOptions comprise the database IP address, database port number, database name, database user name, database password, data table name and the parameters required by the spatial analysis;
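The option handling in step 1 can be sketched in plain Java. The class name SpatialPipelineOptions and the flag names (--dbHost, --dbPort, --dbName) below are illustrative assumptions, not the actual Beam API; in real Beam code, options interfaces are created via PipelineOptionsFactory.fromArgs.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a PipelineOptions-style holder populated from
// command-line arguments of the form --key=value. The field names mirror
// the options listed in step 1 (database host, port, name, user, password,
// table name, analysis parameters).
class SpatialPipelineOptions {
    private final Map<String, String> values = new HashMap<>();

    public static SpatialPipelineOptions fromArgs(String[] args) {
        SpatialPipelineOptions opts = new SpatialPipelineOptions();
        for (String arg : args) {
            // accept only well-formed "--key=value" flags
            if (arg.startsWith("--") && arg.contains("=")) {
                int eq = arg.indexOf('=');
                opts.values.put(arg.substring(2, eq), arg.substring(eq + 1));
            }
        }
        return opts;
    }

    public String get(String key, String fallback) {
        return values.getOrDefault(key, fallback);
    }
}
```

A pipeline would read these values once at start-up to decide which database to connect to and which analysis parameters to use.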
and step 2, supporting reading data from a plurality of data sources, including PostgreSQL database data, TSV data, Shapefile data and the like. For each data source, a dedicated Read interface is designed for reading in the data. For example, for PostgreSQL database data, the PostgreRead interface is used for data input;
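The per-source Read interfaces of step 2 can be illustrated with a small sketch. The interface name SpatialRead and the TsvRead implementation are hypothetical stand-ins for the PostgreRead/TSV/Shapefile readers named above, operating on in-memory strings so the example stays self-contained.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// One Read implementation per data source, as described in step 2.
interface SpatialRead {
    List<List<String>> read(List<String> rawRecords);
}

// A TSV reader: each line becomes a list of field strings.
class TsvRead implements SpatialRead {
    @Override
    public List<List<String>> read(List<String> rawRecords) {
        List<List<String>> rows = new ArrayList<>();
        for (String line : rawRecords) {
            // -1 keeps trailing empty fields
            rows.add(Arrays.asList(line.split("\t", -1)));
        }
        return rows;
    }
}
```

A PostgreRead or ShapefileRead implementation would share the same interface but pull records from JDBC or a .shp file instead of a string list.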
and step 3, packaging the data into a SpatialCollection, namely a spatial parallel data set, which comprises parallel sets of many spatial data sets and can serve as the input data or output data of a SpatialTransform in the spatial data processing flow. In order to read both the geometric information and the attribute information of the elements, the method designs PCollection<SimpleFeature>, namely a spatial parallel data set whose generic type is SimpleFeature. SimpleFeature is an interface provided by GeoTools; each SimpleFeature represents a spatial element and is used in a manner similar to a key-value pair: attribute values can be queried by key, the key list is provided by its FeatureType (element type) field, and the attribute values are provided by the values field of Object array type;
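The SimpleFeature access pattern described in step 3 can be mimicked in a few lines. MiniFeature below is a toy illustration of the key list plus Object[] values layout, not the GeoTools SimpleFeature implementation itself.

```java
// Key list (from the FeatureType) plus a parallel Object[] values field,
// queried by key like a key-value pair, as step 3 describes.
class MiniFeature {
    private final String[] keys;     // field names, provided by the FeatureType
    private final Object[] values;   // attribute values, parallel to keys

    public MiniFeature(String[] keys, Object[] values) {
        this.keys = keys;
        this.values = values;
    }

    public Object getAttribute(String key) {
        for (int i = 0; i < keys.length; i++) {
            if (keys[i].equals(key)) return values[i];
        }
        return null;   // unknown key
    }
}
```

GeoTools resolves the key to an index in the same spirit, so both the geometry (stored under its own field name) and ordinary attributes come back through one lookup path.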
and step 4, performing spatial analysis by methods encapsulated in SpatialTransform. A SpatialTransform, namely each step in the pipeline, receives one or more input SpatialCollection, processes them, and outputs a SpatialCollection; it is the component that encapsulates the processing logic. For spatial analysis algorithms such as interpolation analysis and density analysis, which need to process a plurality of input point elements simultaneously, the method provides an improved parallel processing method: during processing, the minimum bounding rectangle of the input point elements is divided into several grids according to the gridFeatureNum (namely the number of elements per grid) parameter input by the user, so that gridFeatureNum point elements are stored in each grid. Then, the SpatialTransform delivers the point elements in each grid as a whole to the slave nodes of the distributed computing platform for parallel processing, and stores the generated point elements into a new PCollection<SimpleFeature>;
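The grid partitioning of step 4 can be sketched as follows. The patent fixes only the elements-per-grid parameter gridFeatureNum; the square-ish cell layout chosen here is an assumption for illustration, and points are plain double[]{x, y} pairs rather than SimpleFeature objects.

```java
import java.util.ArrayList;
import java.util.List;

// Splits the minimum bounding rectangle of the input points into cells so
// that each cell holds roughly gridFeatureNum points; each cell's points
// would then be handed to one worker node as a unit.
class GridPartition {
    public static List<List<double[]>> partition(List<double[]> points, int gridFeatureNum) {
        double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
        double maxX = -Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        for (double[] p : points) {               // minimum bounding rectangle
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        int cells = Math.max(1, (int) Math.ceil((double) points.size() / gridFeatureNum));
        int cols = (int) Math.ceil(Math.sqrt(cells));
        int rows = (int) Math.ceil((double) cells / cols);
        List<List<double[]>> grids = new ArrayList<>();
        for (int i = 0; i < cols * rows; i++) grids.add(new ArrayList<>());
        double w = (maxX - minX) / cols, h = (maxY - minY) / rows;
        for (double[] p : points) {
            int cx = w == 0 ? 0 : Math.min(cols - 1, (int) ((p[0] - minX) / w));
            int cy = h == 0 ? 0 : Math.min(rows - 1, (int) ((p[1] - minY) / h));
            grids.get(cy * cols + cx).add(p);      // assign point to its cell
        }
        return grids;
    }
}
```

Treating each cell as one parallel work item is what lets algorithms that need many neighboring points at once, such as interpolation, run on Beam's single-element parallelism.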
and step 5, supporting the output of data to various data sources, including PostgreSQL database data, TSV data, Shapefile data and the like. For each data source, the invention designs a Write interface for outputting the data. For example, for PostgreSQL database data, the PostgreWrite interface is used for data output. Finally, the SpatialPipeline is run using the Pipeline Runner for the particular runtime environment.
Also, in step 3, reading data from an external source requires the I/O adapter provided by Beam. The exact usage varies between adapters, but every adapter reads from some external data source and returns a SpatialCollection whose elements represent the data records in that source. Each data source adapter has a Read transform; to read data, this Read transform must be applied to the SpatialPipeline object itself.
Also, in step 4, in order to call a SpatialTransform, the SpatialTransform must be applied to the input SpatialCollection. Invoking multiple Beam SpatialTransform functions is similar to a method chain, with some differences: the user applies a SpatialTransform to the input SpatialCollection, which passes as a parameter to the input apply function, which returns an output SpatialCollection.
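The apply-style chaining described above can be illustrated with a toy stand-in for SpatialCollection. MiniCollection is a hypothetical sketch, not the Beam API: the point is only that the transform is passed as a parameter to apply, which returns a new output collection, so calls compose like a method chain.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// A collection that accepts a transform via apply() and returns a new
// output collection, mimicking SpatialCollection.apply(SpatialTransform).
class MiniCollection<T> {
    private final List<T> elements;

    public MiniCollection(List<T> elements) { this.elements = elements; }

    // The transform is the parameter; the output collection is the return value.
    public <R> MiniCollection<R> apply(Function<T, R> transform) {
        return new MiniCollection<>(
            elements.stream().map(transform).collect(Collectors.toList()));
    }

    public List<T> get() { return elements; }
}
```

Because each apply returns a collection, several transforms chain naturally: input.apply(t1).apply(t2) mirrors the pipeline's step-by-step flow.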
The invention has the advantages that:
(1) On the basis of the non-spatial data management provided by Apache Beam, a traditional geographic information system kernel such as GeoTools is made compatible. That is, in step 3, spatial data are packaged into SpatialCollection based on the SimpleFeature interface provided by GeoTools and the PCollection data structure of Apache Beam, and the spatial data can be automatically converted at run time into the data structure required by the corresponding distributed computing platform;
(2) an improved distributed spatial data parallel processing method is provided, which, on the basis of the non-spatial parallel processing method provided by Apache Beam, supports the parallelization of spatial analysis algorithms that need to process a plurality of input point elements simultaneously, such as interpolation analysis and density analysis. That is, in step 4, the minimum bounding rectangle of the input point elements is divided into several grids, and the SpatialTransform passes the point elements in each grid as a whole to the slave nodes of the distributed computing platform for parallel processing. Compared with the prior art, the method spares the user from writing their own spatial data processing algorithms, makes the parallelization of spatial analysis algorithms that need to process a plurality of input point elements simultaneously possible, and can efficiently process and analyze massive spatial data.
Drawings
FIG. 1 is an overall implementation architecture diagram of an embodiment of the present invention.
FIG. 2 is a flow chart of the simplest SpatialPipeline linear operation flow of an embodiment of the present invention.
FIG. 3 is a SpatialPipeline flow chart of PostgreSQL data coordinate transformation according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples for the purpose of facilitating understanding and practice of the invention by those of ordinary skill in the art, and it is to be understood that the embodiments described herein are merely illustrative and explanatory of the invention and are not restrictive thereof.
The invention provides a cross-platform space-time big data distributed processing method, which extends the core of Apache Beam to support spatial data types and spatial analysis, designs and implements a cross-platform space-time big data distributed processing model to process big spatial data, and provides a group of SpatialTransform (spatial parallel transform) operations to support spatial analysis.
The general implementation architecture of the embodiment is shown in FIG. 1. The method is based on the Beam programming model, simplifies the mechanism of large-scale distributed spatial data processing, and abstracts it into SpatialPipeline, SpatialCollection and SpatialTransform, namely the spatial parallel pipeline, the spatial parallel collection and the spatial parallel transform. The method consists of seven main parts, organized as a project package structure. The first part is the Apache Beam model: the classes of the Beam SDK provide the drivers that support running Beam, including the Pipeline Runner, which determines on which distributed computing platform (e.g., Spark, Flink) the SpatialPipeline will run. The second part is the fn package, namely the basic computation package. The third part is the ptransform package, namely the package of data processing operations, which provides spatially distributed operators and depends on the fn package. The fourth part is the io package, namely the data read-write package. The fifth part is the pcollection package, namely the parallel data set package. The sixth part is the pipeline package, which encapsulates the data processing flow of each tool and is called directly by the corresponding tool in the app package. The seventh part is the app package, namely the application package, which encapsulates the fixed GIS tools provided by the platform in addition to the workflow. The operation flow of the method can be summarized as follows: create and execute a SpatialPipeline object, call the corresponding IO to read data into a SpatialCollection, call the corresponding method in the pipeline package to carry out the core operation steps, and finally write out the resulting SpatialCollection through IO. The method comprises the following steps:
step 1: Based on Apache Beam, the method implements the definition of SpatialPipeline, namely the spatial parallel processing pipeline, which encapsulates the whole spatial data processing task from beginning to end; the process comprises reading input data, transforming data and writing output data. These pipelines provide a set of language-specific SDKs for building the pipeline (supported languages include but are not limited to Java, Python, etc.) and execute the pipeline using a Runner for a specific runtime environment (supported runtime environments include but are not limited to Spark for batch processing, Flink for stream processing, etc.). A pipeline contains a series of SpatialCollection, SpatialTransform and I/O operations that form a complete set of computation logic.
Referring to FIG. 2, the method first requires creating an instance of SpatialPipeline. When creating the SpatialPipeline, some configuration options, namely the PipelineOptions, also need to be set. The user may set the pipeline's configuration options by entering parameters and pass them to the SpatialPipeline object when the object is created. By reading the PipelineOptions object, aspects such as the input data and execution mode of the SpatialPipeline can be configured according to its parameters. Then, by applying SpatialTransform (spatial parallel transform) operations to the input SpatialCollection (spatial parallel collection), the output data required by the user can be obtained.
Referring to FIG. 3, the present invention illustrates the operation flow of SpatialPipeline using a SpatialPipeline for the coordinate transformation of vector data as an embodiment. The interface performs the following operations:
first, a PipelineOptions object, namely the pipeline configuration options, is constructed, and the options for configuring the pipeline are read from the command line. Different aspects of the pipeline are configured using the PipelineOptions, such as the Pipeline Runner that will execute the pipeline and any runner-specific configuration required by the selected runner. The PipelineOptions of the invention comprise the database IP address, the database port number, the database name, the database user name, the database password, the data table name and the parameters required by the spatial analysis.
Then, the input data is read: the Read transform reads the data from an external source and returns a SpatialCollection representation of the data for use by the pipeline. Data is read from an external source using the I/O adapter provided by Beam. The exact usage varies between adapters, but every adapter reads from some external data source and returns a SpatialCollection whose elements represent the data records in that source. Each data source adapter has a Read transform; to read, the transform must be applied to the Pipeline object itself. The invention reads from the PostgreSQL database and returns a SpatialCollection whose elements are of SimpleFeature type, each SimpleFeature representing a spatial element.
Next, a SpatialTransform is applied. To call a SpatialTransform, it must be applied to the input SpatialCollection. Invoking multiple Beam transform functions is similar to a method chain, with some differences: the user applies a SpatialTransform to the input SpatialCollection, which is passed as a parameter to the apply function, and the apply function returns the output SpatialCollection.
And finally, the output data is written: the Write transform writes the data in the PCollection to an external data source, and the pipeline is run using the specified Pipeline Runner.
Step 2: and designing an interface supporting a plurality of data sources to read in data, wherein the data sources comprise PostgreSQL database data, TSV data, Shapefile data and the like. For each data source, a Read interface is designed for the data source respectively for reading in the data. For example, for PostgreSQL database data, the PostgreRead interface is used for data entry;
for example, when data is read from the PostgreSQL database, since input data must be serialized to allow operations to be performed in Beam, the data read from the database must be converted into a serializable string list object for further operations. GeoTools provides a get data source method for reading data table information into JDBCDataStore format according to database connection parameters. In addition, an element data source FeatureSource is obtained through the data source, and an element set SimpleFeatureCollection in the element data source is read. And then, storing the attribute information and the geometric information of each SimpleFeatur in a character string list by traversing the SimpleFeatureCOLLECTIONATION object.
And step 3: data is packaged as a SpatialCollection, a spatially parallel data set, which includes many parallel sets of spatial data sets and may be input data or output data of a SpatialTransform in a spatial data processing stream. In order to read the geometric information of the elements and the attribute information of the elements, the method designs PCollection < SimpleFeture >, namely a spatial parallel data set with a generalized SimpleFeture. Simple Feature is an interface provided by GeoTools, i.e. a space element, which is used in a similar way to key-value pairs and can query attribute values through keys, a key list is provided by its field FeatureType (element type), and attribute values are provided by the values field of Object array type;
the invention considers that elements of a spatialCollection data set not only contain geometric information but also contain attribute information, designs PCollection < SimpleFeture >, namely a spatial parallel data set with a generalized SimpleFeture (spatial element), and has the following construction process. Firstly, an element type FeatureType is obtained through the feature source obtained in the step 2, the element type FeatureType is converted into an index field of a graph data type with keys as character strings and a shaped value, the index field is used for storing the field names of the elements and the sequence numbers of the fields stored in the values field in SimpleFeature, and SimpleFeature is created through the index field. And then reading the geometric and attribute fields of the shapefile data from the database, storing the geometric and attribute fields into a memory String list, reconstructing PCollection < String >, and defining a decoder for the data set as a String decoder StringUtf8 Coder. And then converting the WKT geometric information in the character string list into a Geometry geometric object defined in JTS by a WKT (a geometric information data type) parser WKTreader provided by JTS in spatialTransform, and finally assigning the attribute field and the Geometry object in the character string list as values of SimpleFeture by a construction method, thereby constructing PCollection < SimpleFeture >.
And step 4: Spatial analysis is performed by methods encapsulated in SpatialTransform. A SpatialTransform, namely each step in the pipeline, receives one or more input SpatialCollection, processes them, and outputs a SpatialCollection; it is the component that encapsulates the processing logic. For spatial analysis algorithms such as interpolation analysis and density analysis, which need to process a plurality of input point elements simultaneously, the method provides an improved parallel processing method: during processing, the minimum bounding rectangle of the input point elements is divided into several grids according to the gridFeatureNum (namely the number of elements per grid) parameter input by the user, so that gridFeatureNum point elements are stored in each grid. Then, the SpatialTransform delivers the point elements in each grid to the slave nodes of the distributed computing platform for parallel processing, and stores the generated point elements into a new PCollection<SimpleFeature>;
to call a SpatialTransform, SpatialTransform must be applied to the incoming SpatialCollection. Invoking multiple Beam SpatialTransform functions is similar to a method chain, with some differences: the user applies a SpatialTransform to the input SpatialCollection, which passes as a parameter to the input apply function, which returns an output SpatialCollection.
And 5: output of data to a variety of data sources is supported, including PostgreSQL database data, TSV data, and sharefile data, among others. For each data source, the invention designs a Write interface for outputting data. For example, for PostgreSQL database data, the PostgreWrite interface is used for data output. Finally, SpatialPipeline is run using the Pipeline Runner for the particular runtime environment.
For example, when writing out data to the result table of the PostgreSQL database, the attribute values in the SimpleFeature may be queried and written into the database by keys according to the key list provided by FeatureType.
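The key-driven write-out above can be sketched as follows. FeatureWriter and its toRow method are hypothetical names; a real writer would bind the resulting column-to-value map to a JDBC INSERT statement.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Walks the FeatureType's key list and looks up each attribute value by
// position, yielding the column -> value map for one row of the result table.
class FeatureWriter {
    public static Map<String, Object> toRow(String[] keys, Object[] values) {
        Map<String, Object> row = new LinkedHashMap<>();   // preserves column order
        for (int i = 0; i < keys.length; i++) {
            row.put(keys[i], values[i]);
        }
        return row;
    }
}
```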
When the method is specifically implemented, the processes can be automatically operated by adopting a computer software technology.

Claims (5)

1. A cross-platform space-time big data distributed processing method is characterized by comprising the following steps:
step 1, creating a space parallel processing pipeline object on an Apache Beam model, setting a configuration option, and transmitting the configuration option to the space parallel processing pipeline object;
step 2, designing interfaces for reading in various data sources;
step 3, packaging the data into a spatial parallel data set, wherein the spatial parallel data set comprises parallel sets of a plurality of spatial data sets and comprises input data or output data of SpatialTransform in a spatial data processing flow; designing PCollection<SimpleFeature>, wherein each SimpleFeature represents a spatial element, is used in a manner similar to a key-value pair, and is queried for attribute values through keys, the key list being provided by the field FeatureType, and the attribute values being provided by the values field of Object array type, so that when an element is read, both its geometric information and its attribute information can be read;
the specific process for designing the PCollection < SimpleFeatur > comprises the following steps: firstly, converting read-in data into a character string, wherein the value of the character string is an index field of a shaped graph data type, the field name of a storage element and the sequence number of the field stored in a value field in SimpleFeature are used for storing the element, and the SimpleFeature is created by the field name and the field; then, reading the geometric and attribute fields of the shapefile data from the database, storing the geometric and attribute fields into a memory String list, then constructing PCollection < String >, and defining a decoder code as a String decoder StringUtf8Coder for the data set; then converting WKT geometric information in the character string list into a Geometry geometric object defined in JTS through a WKT analyzer WKTreader provided by JTS in PTransform, and finally assigning attribute fields and the Geometry object in the character string list into values of SimpleFeatures through a construction method, thereby constructing PCollection < SimpleFeatures >;
step 4, performing spatial analysis by methods encapsulated in SpatialTransform, wherein the minimum bounding rectangle of the input point elements is divided into a plurality of grids according to the gridFeatureNum parameter input by a user, and gridFeatureNum point elements are stored in each grid; then, the SpatialTransform delivers the point elements in each grid to the slave nodes of the distributed computing platform for parallel processing, and stores the generated point elements into a new PCollection<SimpleFeature>;
and step 5, designing Write interfaces corresponding to various data sources for outputting the data.
2. The method of claim 1, wherein:
in the step 1, firstly, an instance of a space parallel processing pipeline needs to be created, when the space parallel processing pipeline is created, configuration options are set, a user sets the configuration options of the pipeline in a parameter input mode, and when an object is created, the configuration options are transmitted to a space parallel processing pipeline object; configuring input data and an execution mode of a space parallel processing pipeline according to parameters in the configuration option object by reading the configuration option object; then, by performing SpatialTransform on the inputted SpatialCollection, output data required by the user can be obtained.
3. The method of claim 1, wherein:
the data in step 2 comprises PostgreSQL database data, TSV data and Shapefile data.
4. The method of claim 3, wherein: when the Beam SpatialTransform functions are called, the user applies a SpatialTransform to the input SpatialCollection, which is passed as a parameter to the apply function, and the apply function returns an output SpatialCollection.
5. A system for realizing the cross-platform spatio-temporal big data distributed processing method of any one of claims 1 to 4, characterized in that: the system comprises a pipeline creating module, a data input module, a data packaging module and a space analysis module;
the pipeline creating module is used for creating a space parallel processing pipeline object on the Apache Beam model, setting a configuration option and transmitting the configuration option to the space parallel processing pipeline object;
the data input module reads data with various data sources;
the data packaging module is used for packaging data into a spatial parallel data set, comprises a parallel set of a plurality of spatial data sets and is used as input data or output data in the spatial analysis module;
and the spatial analysis module performs spatial analysis by a method packaged in a SpatialTransform.
CN202011643656.7A 2020-12-31 2020-12-31 Cross-platform space-time big data distributed processing method and system Active CN112732852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011643656.7A CN112732852B (en) 2020-12-31 2020-12-31 Cross-platform space-time big data distributed processing method and system


Publications (2)

Publication Number Publication Date
CN112732852A CN112732852A (en) 2021-04-30
CN112732852B true CN112732852B (en) 2022-09-13

Family

ID=75609338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011643656.7A Active CN112732852B (en) 2020-12-31 2020-12-31 Cross-platform space-time big data distributed processing method and system

Country Status (1)

Country Link
CN (1) CN112732852B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115269744A (en) * 2022-07-25 2022-11-01 中化现代农业有限公司 Agricultural geographic data visualization method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246516B (en) * 2013-05-16 2017-02-08 中国科学院计算机网络信息中心 Internet-based remote sensing data analysis tool packaging service method
WO2017115899A1 (en) * 2015-12-30 2017-07-06 ㈜리얼타임테크 In-memory database system having parallel processing-based moving object data computation function and method for processing the data
CN106469223B (en) * 2016-09-23 2018-07-03 交通运输部规划研究院 The space of compatible ArcGIS a kind of and the unified control method and system of attribute data
CN107544948B (en) * 2017-07-12 2019-12-06 中国农业大学 Vector file conversion method and device based on MapReduce
CN110597935A (en) * 2019-08-05 2019-12-20 北京云和时空科技有限公司 Space analysis method and device
CN112000312B (en) * 2020-07-24 2022-04-29 湖北地信科技集团股份有限公司 Space big data automatic parallel processing method and system based on Kettle and GeoTools


Similar Documents

Publication Publication Date Title
US10409560B1 (en) Acceleration techniques for graph analysis programs
JP4669788B2 (en) Restriction condition solving method, restriction condition solving apparatus, and restriction condition solving system
CN107704382B (en) Python-oriented function call path generation method and system
US20060282452A1 (en) System and method for mapping structured document to structured data of program language and program for executing its method
US11667033B2 (en) Systems and methods for robotic process automation
WO2016163901A1 (en) An apparatus for processing an abstract syntax tree being associated with a source code of a source program
US20190220387A1 (en) Unexplored branch search in hybrid fuzz testing of software binaries
CN105550268A (en) Big data process modeling analysis engine
CN110825385B (en) Method for constructing read Native offline package and storage medium
JP2001166949A (en) Method and device for compiling source code by using symbolic execution
CN113177034B (en) Cross-platform unified distributed graph data processing method
CN112732852B (en) Cross-platform space-time big data distributed processing method and system
CN115562629A (en) RPA flow representation method, system, device and storage medium
CN114356964A (en) Data blood margin construction method and device, storage medium and electronic equipment
CN111190587A (en) Method and system for automatically generating engineering front-end code based on JDBC
US11573777B2 (en) Method and apparatus for enabling autonomous acceleration of dataflow AI applications
CN113608748A (en) Data processing method, device and equipment for converting C language into Java language
CN114547206A (en) Data synchronization method and data synchronization system
CN113885844A (en) Business service arranging method and related device
CN107562430B (en) Compiling method for file processing function of mobile pi-calculus language
CN116738900B (en) Transcoding device and method for intellectual property block
CN109062556A (en) A kind of function programming system of more return values
CN117075912B (en) Method for program language conversion, compiling method and related equipment
CN114691715A (en) ANTLR-based data acquisition method and device, electronic equipment and storage medium
Somogyi et al. Towards a Model Transformation based Code Renovation Tool.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant