CN114968984A - Digital twin full life cycle management platform - Google Patents

Digital twin full life cycle management platform

Info

Publication number
CN114968984A
Authority
CN
China
Prior art keywords
data
unit
layer
analysis
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210658459.5A
Other languages
Chinese (zh)
Inventor
魏兴刚
刘旭阳
戎廷恩
郑相
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qihang Digital Technology Co ltd
Original Assignee
Shanghai Qihang Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qihang Digital Technology Co ltd filed Critical Shanghai Qihang Digital Technology Co ltd
Priority to CN202210658459.5A priority Critical patent/CN114968984A/en
Publication of CN114968984A publication Critical patent/CN114968984A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21: Design, administration or maintenance of databases
    • G06F16/211: Schema design and management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22: Indexing; Data structures therefor; Storage structures
    • G06F16/2282: Tablespace storage structures; Management thereof
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a digital twin full-life-cycle management platform comprising a base layer, a data layer, a supporting platform layer, a business application layer, and a user layer. The base layer provides the hardware and network environment for the operation of the whole platform and comprises a server, a storage system, a backup and disaster-recovery module, an operating system, and network equipment; the data layer monitors and detects various data and comprises a business database and a model database. Building on existing computer hardware and network platforms, the disclosed platform establishes application-service and supervision systems for objects such as enterprises, forms serial relations among business data, and finally forms an intelligent platform integrating supervision-service display with dynamic updating of remotely monitored data, so that a data resource center can be designed once the content of the data resources is determined.

Description

Digital twin full life cycle management platform
Technical Field
The invention belongs to the technical field of data management, and particularly relates to a digital twin full-life-cycle management platform.
Background
With the development of digital manufacturing, the digital twin has emerged. A digital twin consists of a physical product and a virtual product, with seamless data exchange between the two enabled by information technology, sensor technology, and the like. The virtual product is updated in real time according to the actual state of the physical product, enabling state monitoring of, and feedback to, the physical product. Given the high-fidelity nature of the virtual product, data analysis based on it can be used to improve the performance of the physical product, predict its failure conditions, and so on.
However, the existing means for monitoring objects such as production enterprises show a low degree of digital application, cannot support overall management of an enterprise's whole life cycle, and in particular suffer from significant loopholes and defects in database construction.
The present invention therefore seeks to improve on the above problems.
Disclosure of Invention
The invention mainly aims to provide a digital twin full-life-cycle management platform. Based on existing computer hardware and network platforms, it establishes application-service and supervision systems for objects such as enterprises, forms serial relations among business data, and finally forms an intelligent platform integrating supervision-service display with dynamic updating of remotely monitored data, so that a data resource center can be designed once the content of the data resources is determined and the enterprise can carry out full-life-cycle management of the object's main body.
Another objective of the present invention is to provide a digital twin full-life-cycle management platform in which the database is developed for exposure through interfaces, calls, and similar forms. By combing and analyzing the existing data according to practical requirements and following the relevant database standards, a database conforming to sharing standards is built, with data directories, metadata, and the like, forming a data resource system of spatial basic information. Three-dimensional live-action data are acquired, a three-dimensional live-action model is built, and a 2D/3D integrated spatial database is constructed, forming a three-dimensional "one map" of the resources.
In order to achieve the above object, the present invention provides a digital twin full lifecycle management platform for full lifecycle management, comprising a base layer, a data layer, a support platform layer, a service application layer and a user layer, wherein:
the base layer is used for providing a hardware environment and a network environment for the whole platform to run, and comprises a server, a storage system, a backup disaster recovery module, an operating system and network equipment;
the data layer is used for monitoring and detecting various types of data and comprises a business database (provided with business data) and a model database (provided with model data);
the supporting platform layer is used for supporting the development and operation of the whole platform (including a Cesium development and operation platform, a MongoDB database platform, and various other basic development and operation platforms);
the service application layer is used for integrating all service systems and enabling the service systems to communicate with each other, thereby supporting seamless switching among a plurality of service systems and supporting common application and analysis service in the service systems;
and the user layer is used for each user to log in to the platform through single sign-on, so that operations are performed within the authority range of the current user.
As a further preferable embodiment of the above-described technical means, the construction (requirement) of the database in the data layer includes:
the data resource system construction module is used for combing data resources, analyzing the levels, categories and relations among various types of data, uniformly planning the data resources, formulating a uniform data resource coding and classifying system and establishing a space basic information data directory to form a space data resource system;
the data resource directory system construction module is used for uniformly planning data resources on the basis of collecting and analyzing data information of each node, and respectively establishing a main node data resource system and a sub-node data resource system according to uniform data resource coding and classification;
the unified data model design module organizes and describes information with the spatial object as the core according to a unified definition, and codes the spatial objects according to a unified rule to form an object entity model and a relation model, which are then used to describe the spatial characteristics, business characteristics, logical relations, and business behavior of each object;
the data logic model module comprises an entity object model and a data relation model;
the data analysis module is used for analyzing the various existing stock data in light of the database construction targets and requirements, ascertaining the data quality and defect conditions, and analyzing the differences between the current data and the target data in structure, content, and form, so as to combine the available resources, confirm the work content, approximate workload, and basic working method needed to achieve the construction target, and compile the results into a data analysis report;
the map vectorization module is used for vectorizing a part of image results to build a database by combining data application requirements, so that data query, analysis and statistics are supported;
the data standardization conversion module is used for converting the attribute data and the graphic data of the existing data result so as to ensure that the multi-metadata result forms basic data meeting the requirement of a target structure;
the map symbolization module is used for setting element display symbols under different display scales;
the data detection and confirmation module is used for performing quality inspection on the various processed planning-resource data before warehousing: qualified data are warehoused directly, while data that fail inspection are modified and processed for their specific problems and warehoused only after passing re-inspection, so that data quality problems are avoided at the source and the accuracy and correctness of all warehoused data are ensured;
the data warehousing and publishing module is used for dividing the warehousing work into simulated warehousing and formal warehousing, slicing the various basic and professional databases, and establishing indexes, so that the basic and professional data can be managed and applied in routine work;
a data center management module that manages data in a centralized manner and includes management functions of metadata management, data quality inspection, data conversion, data set management maintenance, data editing, thematic mapping, map distribution, data backup, data security, and system maintenance.
As a further preferred technical solution of the above technical solution, the data resource directory system building module includes a physical database directory unit, a data resource general directory unit, a data resource public service directory unit, and a data resource sharing service directory unit, wherein:
the physical database directory unit is used for providing a unified physical database directory, can uniformly manage all physical data and files related to the platform, forms a data resource key information list by recording physical information metadata, and effectively integrates and standardizes data information;
the data resource general catalog unit is used for establishing a general catalog of all online data resources related to spatial basic information management and service, recording the related metadata information, and optimizing the data storage structure (to operate data storage, management, and calling services more effectively, the data need to be classified by service type);
the data resource public service directory unit comprises a directory of related thematic information opened to the public so as to check various public information according to the public service directory;
the data resource sharing service directory unit is used for compiling and recording shared-data information and building a data resource sharing service directory, which comprises the coverage range, identification, and subject key information of the shared data, with the related metadata information compiled and recorded, so as to meet data-resource sharing requirements and realize sharing and exchange of data resources among departments.
As a further preferred technical solution of the above technical solution, the data analysis module includes a data analysis content unit, a data analysis method unit, and a data analysis result unit, wherein:
the data analysis content unit is used for overall analysis and detailed analysis of single thematic data and correlation analysis of associated data;
the data analysis method unit is used for applying different analysis methods according to the data type, the methods including a script analysis method, a system/tool-assisted method, a manual query/interview method, and a data analysis flow method;
the data analysis result unit is used for compiling the data analysis process and results into a data analysis report that presents the data overview, the existing problems, and suggestions for data utilization.
As a further preferred technical solution of the above technical solution, the map vectorization module includes a table structure planning unit, a database and layer creation unit, a map registration unit, a vectorization acquisition unit, an attribute assignment unit, and a result checking unit, wherein:
the table structure planning unit is used for analyzing and planning the content to be digitized in combination with data condition analysis, preliminarily analyzing the content of the corresponding theme, element type and key field of each layer, and further planning and designing the corresponding table structure in combination with service requirements to form a database standard;
the database and layer creation unit is used for creating the corresponding element layers, defining the database and element sets, setting the data projection coordinate parameters for each element set as a whole, and defining the thematic layers one by one according to professional layering (i.e., element types and their specific attribute fields);
the map registration unit is used for registering the map after scanning: the scanned raster image is checked to ensure that vectorization can proceed smoothly, the scanned image is loaded and the map is registered, coordinates of known feature points are selected as control points during registration, and after the control points are selected the registered image is resampled according to the set transformation formula;
the vectorization acquisition unit is used for uniformly loading the registered images and the pre-created element sets to the current desktop environment, vectorizing the newly-created elements one by one under a proper scale by starting an editor and setting a digital image layer, and assigning attribute items;
the attribute assignment unit is used for synchronously inputting the key attributes of the map into the database in time in the map digitalization process;
the result checking unit is used for checking the digitalized result, the layer storage, the attribute information and various topological relations so as to ensure the accuracy of the result.
To achieve the above object, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the digital twin full lifecycle management platform when executing the program.
To achieve the above object, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the digital twin full lifecycle management platform.
Drawings
FIG. 1 is a schematic diagram of the architecture of the digital twin full lifecycle management platform of the present invention.
FIG. 2 is a data warehousing flow diagram of the digital twin full lifecycle management platform of the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the preferred embodiment of the present invention, those skilled in the art should note that the user layer, server, etc. involved in the present invention can be regarded as the prior art.
Preferred embodiments.
The invention discloses a digital twin full-life cycle management platform, which is used for full-life cycle management and comprises a basic layer, a data layer, a supporting platform layer, a business application layer and a user layer, wherein:
the base layer is used for providing a hardware environment and a network environment for the whole platform to run, and comprises a server, a storage system, a backup disaster recovery module, an operating system and network equipment;
the data layer is used for monitoring and detecting various types of data and comprises a business database (provided with business data) and a model database (provided with model data);
the supporting platform layer is used for supporting the development and operation of the whole platform (including a Cesium development and operation platform, a MongoDB database platform, and various other basic development and operation platforms);
the service application layer is used for integrating all service systems and enabling the service systems to communicate with one another, so that seamless switching among a plurality of service systems is supported, and common application and analysis service (including a monitoring, supervision and early warning system, a large-screen display system and a three-dimensional GIS application system) in the service systems are supported;
and the user layer is used for each user to log in to the platform through single sign-on, so that operations are performed within the authority range of the current user.
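The single sign-on behaviour of the user layer above can be sketched as follows. This is an illustrative Python sketch only, not part of the patented implementation; all names (`Platform`, `UserSession`, `single_sign_on`) are assumptions introduced for this example.

```python
class UserSession:
    """One logged-in user and the authority range granted to them."""

    def __init__(self, user_id: str, permissions: set):
        self.user_id = user_id
        self.permissions = permissions  # authority range of the current user

    def can(self, operation: str) -> bool:
        # An operation is allowed only inside the user's authority range.
        return operation in self.permissions


class Platform:
    """Minimal stand-in for the platform's user layer."""

    def __init__(self):
        self._sessions = {}

    def single_sign_on(self, user_id: str, permissions: set) -> UserSession:
        # A repeated login returns the existing session instead of a new one.
        if user_id not in self._sessions:
            self._sessions[user_id] = UserSession(user_id, permissions)
        return self._sessions[user_id]


platform = Platform()
session = platform.single_sign_on("operator-01", {"view_dashboard", "query_data"})
print(session.can("query_data"))  # True: inside the authority range
print(session.can("edit_model"))  # False: outside the authority range
```

In this sketch the authority check happens at the session, mirroring the claim that each user "performs operations according to the authority range of the current user".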
Specifically, the construction (requirement) of the database in the data layer includes:
the data resource system construction module is used for combing data resources, analyzing the levels, categories and relations among various types of data, uniformly planning the data resources, formulating a uniform data resource coding and classifying system and establishing a space basic information data directory to form a space data resource system;
the data resource directory system construction module is used for uniformly planning data resources on the basis of collecting and analyzing data information of each node, and respectively establishing a main node data resource system and a sub-node data resource system according to uniform data resource coding and classification;
the unified data model design module organizes and describes information with the spatial object as the core according to a unified definition, and codes the spatial objects according to a unified rule to form an object entity model and a relation model, which are then used to describe the spatial characteristics, business characteristics, logical relations, and business behavior of each object;
the data logic model module comprises an entity object model and a data relation model;
the data analysis module is used for analyzing the various existing stock data in light of the database construction targets and requirements, ascertaining the data quality and defect conditions, and analyzing the differences between the current data and the target data in structure, content, and form, so as to combine the available resources, confirm the work content, approximate workload, and basic working method needed to achieve the construction target, and compile the results into a data analysis report;
the map vectorization module is used for vectorizing a part of image results to build a database by combining data application requirements, so that data query, analysis and statistics are supported;
the data standardization conversion module is used for converting the attribute data and the graphic data of the existing data result so as to ensure that the multi-metadata result forms basic data meeting the requirement of a target structure;
the map symbolization module is used for setting element display symbols under different display scales;
the data detection and confirmation module is used for performing quality inspection on the various processed planning-resource data before warehousing: qualified data are warehoused directly, while data that fail inspection are modified and processed for their specific problems and warehoused only after passing re-inspection, so that data quality problems are avoided at the source and the accuracy and correctness of all warehoused data are ensured;
the data warehousing and publishing module is used for dividing the warehousing work into simulated warehousing and formal warehousing, slicing the various basic and professional databases, and establishing indexes, so that the basic and professional data can be managed and applied in routine work;
the data center management module is used for managing data in a centralized manner and comprises management functions of metadata management, data quality inspection, data conversion, data set management and maintenance, data editing, thematic mapping, map publishing, data distribution, data backup, data safety and system maintenance.
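The inspect-then-warehouse flow of the data detection and confirmation module above (cf. FIG. 2) can be sketched roughly as follows. The qualification rule (a non-empty "name" attribute) and all function names are hypothetical, chosen only to illustrate the flow: qualified data go straight in, unqualified data are fixed for their specific problem and re-inspected before warehousing.

```python
def quality_inspect(record: dict) -> bool:
    # Hypothetical rule: a record qualifies if its "name" attribute is non-empty.
    return bool(record.get("name"))

def fix(record: dict) -> dict:
    # Stand-in for "modifying and processing" a record that failed inspection.
    record = dict(record)
    if not record.get("name"):
        record["name"] = "UNKNOWN"
    return record

def warehouse(records: list) -> list:
    stored = []
    for rec in records:
        if not quality_inspect(rec):
            rec = fix(rec)           # modify for the specific problem
        if quality_inspect(rec):     # warehouse only after passing inspection
            stored.append(rec)
    return stored

print(warehouse([{"name": "parcel-1"}, {"name": ""}]))
# [{'name': 'parcel-1'}, {'name': 'UNKNOWN'}]
```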
More specifically, the data resource directory system building module includes a physical database directory unit, a data resource general directory unit, a data resource public service directory unit, and a data resource sharing service directory unit, where:
the physical database directory unit is used for providing a unified physical database directory, can uniformly manage all physical data and files related to the platform, forms a data resource key information list by recording physical information metadata, and effectively integrates and standardizes data information;
the data resource general catalog unit is used for establishing a general catalog of all online data resources related to spatial basic information management and service, recording the related metadata information, and optimizing the data storage structure (to operate data storage, management, and calling services more effectively, the data need to be classified by service type);
the data resource public service directory unit comprises a directory of related thematic information opened to the public so as to check various public information according to the public service directory;
the data resource sharing service directory unit is used for compiling and recording shared-data information and building a data resource sharing service directory, which comprises the coverage range, identification, and subject key information of the shared data, with the related metadata information compiled and recorded, so as to meet data-resource sharing requirements and realize sharing and exchange of data resources among departments.
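One plausible shape for an entry in the sharing service directory just described is shown below: coverage range, identification, subject key information, and the compiled metadata. The field and class names are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SharedResourceEntry:
    """One record in a (hypothetical) data resource sharing service directory."""
    identifier: str   # identification of the shared data
    subject: str      # subject key information
    coverage: str     # coverage range of the shared data
    metadata: dict = field(default_factory=dict)  # related metadata information

directory = []
directory.append(SharedResourceEntry(
    identifier="RES-0001",
    subject="land use",
    coverage="city-wide",
    metadata={"format": "vector", "updated": "2022-06"},
))
print(directory[0].identifier)  # RES-0001
```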
Further, the data analysis module comprises a data analysis content unit, a data analysis method unit and a data analysis result unit, wherein:
the data analysis content unit is used for overall analysis and detailed analysis of single thematic data and correlation analysis of associated data;
the data analysis method unit is used for applying different analysis methods according to the data type, the methods including a script analysis method, a system/tool-assisted method, a manual query/interview method, and a data analysis flow method;
the data analysis result unit is used for compiling the data analysis process and results into a data analysis report that presents the data overview, the existing problems, and suggestions for data utilization.
Furthermore, the map vectorization module comprises a table structure planning unit, a database and layer creation unit, a map registration unit, a vectorization acquisition unit, an attribute assignment unit and a result checking unit, wherein:
the table structure planning unit is used for analyzing and planning the content to be digitized in combination with data condition analysis, preliminarily analyzing the content of the corresponding theme, element type and key field of each layer, and further planning and designing the corresponding table structure in combination with service requirements to form a database standard;
the database and layer creation unit is used for creating the corresponding element layers, defining the database and element sets, setting the data projection coordinate parameters for each element set as a whole, and defining the thematic layers one by one according to professional layering (i.e., element types and their specific attribute fields);
the map registration unit is used for registering the map after scanning: the scanned raster image is checked to ensure that vectorization can proceed smoothly, the scanned image is loaded and the map is registered, coordinates of known feature points are selected as control points during registration, and after the control points are selected the registered image is resampled according to the set transformation formula;
the vectorization acquisition unit is used for uniformly loading the registered images and the pre-created element sets to the current desktop environment, vectorizing the newly-created elements one by one under a proper scale by starting an editor and setting a digital image layer, and assigning attribute items;
the attribute assignment unit is used for synchronously inputting the key attributes of the map into the database in time in the map digitalization process;
the result checking unit is used for checking the digitalized result, the layer storage, the attribute information and various topological relations so as to ensure the accuracy of the result.
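The control-point step in the map registration unit above is commonly realized as an affine transformation fitted to the selected control points, after which the image is resampled through that transformation. The patent does not specify the transformation formula, so the following is a hedged sketch assuming the simplest case of exactly three control points, solved in pure Python by Cramer's rule.

```python
def solve_affine(src, dst):
    """Fit x' = a*x + b*y + c, y' = d*x + e*y + f from 3 control-point pairs."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[x, y, 1.0] for x, y in src]
    D = det3(A)

    def solve(rhs):
        # Cramer's rule: replace each column of A with the right-hand side.
        out = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            out.append(det3(M) / D)
        return out

    a, b, c = solve([p[0] for p in dst])
    d, e, f = solve([p[1] for p in dst])
    return a, b, c, d, e, f

def apply_affine(params, pt):
    """Map one image coordinate through the fitted transformation."""
    a, b, c, d, e, f = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)

# Three known control points: image coords -> map coords (scale 2x/3x, shift).
params = solve_affine([(0, 0), (1, 0), (0, 1)],
                      [(10, 20), (12, 20), (10, 23)])
print(apply_affine(params, (1, 1)))  # (12.0, 23.0)
```

Resampling the registered image then amounts to evaluating this transformation (or its inverse) at every output pixel; production GIS tools fit it by least squares over many control points rather than exactly three.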
Preferably, the unified data model design module further comprises:
unified object definition
The basic graphic elements of spatial data management (land parcels) are spatialized to form spatial objects for spatial data resource management, and information is organized and described around these spatial objects according to a unified definition. Basic information that is relatively stable and reflects the essential characteristics of a spatial object is directly associated with the object to form its basic description; information that changes readily is indirectly associated with the spatial object to form its business description.
Meanwhile, to ensure the uniqueness of each spatial data object across the whole data resource, spatial objects are coded according to a unified rule and given an independent unique identification number, forming the basis and the link for business association, information backtracking, and comprehensive analysis.
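The unified coding rule above can be illustrated as follows. The patent does not define the code's segments, so the category/zone/sequence composition here (and the `make_object_code` name) is an assumption; the point is only that identical inputs still yield distinct, unique identification numbers.

```python
import itertools

_seq = itertools.count(1)  # platform-wide sequence ensures uniqueness

def make_object_code(category: str, zone: str) -> str:
    """Compose a unique spatial-object code, e.g. 'LB-310000-000001'.

    Hypothetical segments: category code, zone code, 6-digit sequence number.
    """
    return "{}-{}-{:06d}".format(category, zone, next(_seq))

c1 = make_object_code("LB", "310000")
c2 = make_object_code("LB", "310000")
print(c1)        # LB-310000-000001
print(c1 != c2)  # True: codes stay unique even for identical category/zone
```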
Unified data logic model
1) Entity object model
The organization and the storage of the space object, the basic information, the project information and the service information are realized through the entity object model.
In the model, the association between the basic information and the space object is realized through uniform space object coding, and the association between the project information and the service information and the space object is realized through the correspondence between the space object coding and the management information coding.
2) Data relationship model
And constructing a data relation model including a space relation model, a business relation model and a temporal relation model according to the spatial data business relation combing.
Spatial relation model: spatial relations among spatial objects, such as contains, covers, crosses, touches, and disjoint, are recorded in a spatial relation table; the relations are created automatically through spatial overlay and recorded in the table to form a fast index of spatial relations.
Business relation model: data objects generate associated information (project information and business information) as business flows are processed, and the association between each object's project information and business information is recorded in the relation table linking objects and management information.
Temporal relation model: taking time, object changes, and information changes as the objects of description, the historical processes of object change, attribute change, and business change over the whole life cycle of a data object are recorded in a temporal relation table, providing information support for backtracking the historical information of resources.
The design of the data temporal-update relation model is shown in the figure: objects 1 and 2 complete their data updates in a buffer resource pool, the coding, attribute, and business information before and after each change are recorded in the object change table, and the final result data and association tables are stored in the integrated database after data extraction.
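A minimal sketch of the object change table described above, using an in-memory SQLite database: each row records an object's coding, before/after attribute information, and a business note, so the object's history can be replayed in time order for backtracking. The table and column names are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE object_change (
        object_code  TEXT,   -- unified spatial-object code
        changed_at   TEXT,   -- time of the change
        attr_before  TEXT,   -- attribute information before the change
        attr_after   TEXT,   -- attribute information after the change
        service_note TEXT    -- associated business information
    )
""")
conn.execute(
    "INSERT INTO object_change VALUES (?, ?, ?, ?, ?)",
    ("LB-000001", "2022-06-01", "status=planned", "status=built", "completion"),
)

# Backtracking: replay the full change history of one object in time order.
history = conn.execute(
    "SELECT changed_at, attr_before, attr_after FROM object_change "
    "WHERE object_code = ? ORDER BY changed_at",
    ("LB-000001",),
).fetchall()
print(history)  # [('2022-06-01', 'status=planned', 'status=built')]
```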
3) Data logical relationship diagram
Logical relations between the spatial data are established based on the unified object definition and the data logic model. Driven by the current data, management data, and socio-economic data objects, these logical relations form a complete closed-loop workflow. On this basis, the association relations, evolution flows, and life cycles among the internal objects are combed to obtain the detailed logical relations among the various kinds of data.
4) Data association verification
The basic information, project information, and business information of the objects, together with the association relations among them, are identified according to the entity object model and the data relation model, thereby establishing graphical and logical verification among the data objects.
Preferably, for the data analysis module:
1) bulk analysis
The overall data analysis examines the data storage situation, the content stored in each tablespace, the number of records in each table, and whether business data is missing, so as to grasp the overall condition of the data.
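An overall profile of this kind can be produced with a short script. The sketch below, using an in-memory SQLite database purely for illustration, counts the rows in every table and the NULLs in every column, which covers the "number of records" and "missing business data" checks above; the `parcel` table and its fields are invented for the example.

```python
import sqlite3

def bulk_profile(conn):
    """Profile every user table: total row count plus a per-column
    NULL count, giving a first overall picture of the database."""
    profile = {}
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        total = cur.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
        cols = [r[1] for r in cur.execute(f"PRAGMA table_info({t})")]
        nulls = {c: cur.execute(
            f"SELECT COUNT(*) FROM {t} WHERE {c} IS NULL").fetchone()[0]
            for c in cols}
        profile[t] = {"rows": total, "nulls": nulls}
    return profile

# Small example with one gap in the business data (a missing owner).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcel (id INTEGER, owner TEXT)")
conn.executemany("INSERT INTO parcel VALUES (?, ?)",
                 [(1, "a"), (2, None), (3, "c")])
report = bulk_profile(conn)
```

The same pattern applies to any database that exposes a catalog of tables and columns; only the catalog queries change.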
2) Detailed analysis
The detailed analysis examines the content stored in each table and each field and the business-logic associations between tables, so as to determine the correspondence between the current database and the target database. In addition, detailed data analysis must also analyze and count the normalization, validity and completeness of key data index items according to the business rules, so that specific problems found during data processing can be solved in a targeted way.
3) Correlation analysis
The correlation analysis examines the possible association relations between the analysis subjects: whether an association exists, whether it can be established, and whether the related data are consistent. Through correlation analysis, the association and consistency of the various kinds of data are understood.
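A correlation check of this kind can be expressed as two questions per related record: does the claimed parent exist, and do the shared values agree? The sketch below checks a hypothetical permit-to-parcel association (all table and field names invented for illustration).

```python
def check_association(parcels, permits, tol=0.01):
    """Verify a claimed association between two record sets:
    every permit must reference an existing parcel (existence),
    and the areas recorded on both sides must agree (consistency)."""
    parcel_by_id = {p["id"]: p for p in parcels}
    orphans, mismatches = [], []
    for pm in permits:
        parent = parcel_by_id.get(pm["parcel_id"])
        if parent is None:
            orphans.append(pm["id"])                      # broken association
        elif abs(parent["area"] - pm["area"]) > tol:
            mismatches.append(pm["id"])                   # inconsistent data
    return orphans, mismatches

parcels = [{"id": "P1", "area": 120.0}, {"id": "P2", "area": 300.0}]
permits = [{"id": "A", "parcel_id": "P1", "area": 120.0},
           {"id": "B", "parcel_id": "P9", "area": 50.0},   # no such parcel
           {"id": "C", "parcel_id": "P2", "area": 290.0}]  # area disagrees
orphans, mismatches = check_association(parcels, permits)
```

Non-empty `orphans` means the association cannot be established as claimed; non-empty `mismatches` means it exists but the related data are inconsistent.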
Preferably, for the data analysis method unit:
1) script analysis method
Where the system database data and the analysis rules show some regularity, data analysis can be carried out by writing scripts, completing routine analyses such as data totals, non-null items, data normalization and data logical consistency. Depending on how general an analysis script is, packaging it into a dedicated analysis tool for reuse may also be considered.
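The script analysis method can be reduced to a table of field-level predicates applied to every record, tallying the failures per rule. The rules below (a parcel-code pattern and a positive-area check) are hypothetical examples, not rules from the patent.

```python
import re

# Hypothetical analysis rules: field name -> predicate.
# A failure is counted whenever the predicate does not hold.
RULES = {
    "parcel_code": lambda v: bool(re.fullmatch(r"[A-Z]{2}\d{6}", v or "")),  # normalization
    "area":        lambda v: v is not None and v > 0,                        # non-null, valid
}

def analyze(records):
    """Routine scripted analysis: count rule violations per field."""
    failures = {field: 0 for field in RULES}
    for rec in records:
        for field, ok in RULES.items():
            if not ok(rec.get(field)):
                failures[field] += 1
    return failures

stats = analyze([
    {"parcel_code": "SH000123", "area": 420.0},
    {"parcel_code": "bad-code", "area": 10.5},   # fails normalization
    {"parcel_code": "SH000456", "area": None},   # fails non-null check
])
```

Because the rules are data, the same driver can be reused across databases by swapping the rule table, which is what turning the script into a general analysis tool amounts to.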
2) System/tool aiding method
For data problems without obvious regularity, possible problems can be generally found through a system checking method. In addition, for the graph data problem, data analysis can be assisted by a graph tool such as ArcGIS and the like through means such as topology analysis and the like.
3) Manual consultation/interview method
For data which is irregular and cannot be analyzed through a system or a tool, the method of manual spot check is considered to find possible problems. In addition, special situations of the data can be known through interviews and communication with business personnel.
4) Data analysis flow
The data analysis comprises the links of data collection, data management, data analysis, problem induction, analysis report formation and the like.
Preferably, for the data normalization conversion module:
attribute data conversion unit
1) Data conversion mode
1. Direct conversion: the source field and target field have the same field name and field type, and the source table content is mapped directly into the new table.
2. Code comparison: the source field and target field use different series of enumerated values that nevertheless correspond; a code comparison table is established, and the conversion is performed according to that table.
3. Type conversion: the field content is the same, but the field types or expression patterns in the source database and target database differ and must be converted, for example long integer versus integer.
4. Constant conversion: when the target field is newly added or its content is empty, a default value is filled in according to a preset rule.
5. No conversion: fields not required by the target database are not converted.
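The five conversion modes above can be sketched as per-field rules applied to each source record. The code table, field names and default values below are illustrative assumptions.

```python
# Code-comparison table: source enumeration -> target enumeration.
CODE_TABLE = {"01": "residential", "02": "industrial"}

FIELD_RULES = [
    # (target field, mode, source field or constant)
    ("name",     "direct",   "name"),      # same name/type: copy as-is
    ("land_use", "code",     "use_code"),  # translate via CODE_TABLE
    ("area",     "type",     "area_str"),  # string in source, float in target
    ("status",   "constant", "active"),    # new target field: fill default
    # fields the target schema does not need simply get no rule (mode 5)
]

def convert(src: dict) -> dict:
    """Apply the field rules to one source record."""
    out = {}
    for target, mode, arg in FIELD_RULES:
        if mode == "direct":
            out[target] = src[arg]
        elif mode == "code":
            out[target] = CODE_TABLE[src[arg]]
        elif mode == "type":
            out[target] = float(src[arg])
        elif mode == "constant":
            out[target] = src.get(target) or arg
    return out

row = convert({"name": "plot-7", "use_code": "01",
               "area_str": "120.5", "legacy_note": "drop me"})
```

Note how `legacy_note` disappears: having no rule is exactly the "no conversion" mode.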
2) Normalization process
The attribute data standardization mainly solves the following two problems:
1. The problem of homonyms and synonyms (the same name with different meanings, and different names with the same meaning), so that no semantically inconsistent descriptions remain in the information.
2. The problem of non-uniform data types, decimal places and numeric units, so that the same data items are completely consistent within the same administrative region.
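The second problem (non-uniform units and decimal places) reduces to converting every value to one canonical unit and rounding to one canonical precision. The sketch below assumes areas standardized to square metres with two decimals; the unit table and field conventions are illustrative.

```python
# Canonical convention (assumed): areas in square metres, two decimals.
# Incoming values may be in hectares or mu, or carry extra precision.
UNIT_TO_M2 = {"m2": 1.0, "ha": 10_000.0, "mu": 10_000.0 / 15}

def normalize_area(value: float, unit: str) -> float:
    """Convert to the canonical unit, then round to canonical precision."""
    return round(value * UNIT_TO_M2[unit], 2)

a = normalize_area(1.2, "ha")      # hectares -> square metres
b = normalize_area(1, "mu")        # mu -> square metres
c = normalize_area(3.14159, "m2")  # over-precise value rounded
```

With every writer funnelled through such a function, the same data item is guaranteed to be expressed identically across the whole administrative region.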
3) Defining data mapping relationships
On the basis of data standardization, various data achievements are converted into a core database by utilizing a set conversion tool and a mapping rule according to the standard of a core database of planning resources.
The data mapping relation comprises a thematic mapping relation, a table mapping relation and a specific field mapping relation.
4) Translation execution and checking
A data conversion tool is developed according to the data mapping rules, the data results are converted, and the conversion results are verified and checked.
Graphic data conversion unit
1) Vector data conversion
The goal of graphic data structure conversion is to convert the current graphic data formats into a data format that meets the requirements of the target library. Depending on how the graphic data are stored, the formats differ and include MapGIS, Geodatabase, SuperMap and the like. In addition, graphic data not managed by a system may also be in file formats such as CAD, TXT and XML.
The data structure conversion is implemented with a conversion tool developed on ArcEngine; the tool integrates the powerful data format conversion and coordinate system conversion functions of ArcGIS, providing convenience for the standardized conversion of the various spatial data.
The system supports interconversion of multiple data formats, including:
1. interconversion of ArcGIS' own data formats, including SHP, GeodataBase, E00, etc.;
2. text data such as txt, excel and the like and ArcGIS data formats are mutually converted;
3. MapGIS and ArcGIS data transformation (one-way);
4. ArcGIS and other format conversions, such as CAD, MapInfo, XML, and the like.
2) Boundary point file map plotting
The boundary point file is typically stored in txt format. The organization of boundary point files differs across years, for example municipal-standard, ministry-standard and other layouts. The key to plotting boundary point files onto the map is to identify the type of boundary point file and arrange the txt coordinate storage forms of the various types into a standard file form so that they can be converted uniformly.
Paper coordinate input
Paper coordinate input plots the project boundary-point information recorded in the project survey-delimitation results. The sources used include coordinate points recorded in the survey-boundary report, or coordinate points marked on the survey-boundary plot. For paper coordinate input, the coordinate system of the drawing is determined first, and the points are then captured and entered one by one. The captured coordinates can be plotted point by point directly in ArcGIS software, or drawn in familiar mapping software and then converted into the uniform format.
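Arranging the varying txt layouts into one standard form, as described for the boundary point files above, amounts to one reader per layout, all emitting the same `(point number, x, y)` tuples. The comma-separated layout assumed below is hypothetical; each real layout (municipal, ministry, etc.) would get its own reader with the same output shape.

```python
def parse_boundary_points(text: str):
    """Parse one assumed boundary-point layout:
    'point_no, x, y' per line, '#' lines and blanks ignored.
    Emits standard (point_no, x, y) tuples for uniform conversion."""
    points = []
    for line in text.strip().splitlines():
        if not line.strip() or line.startswith("#"):
            continue                      # skip header/comment lines
        no, x, y = line.split(",")
        points.append((no.strip(), float(x), float(y)))
    return points

SAMPLE = """\
# project J2022-18 survey boundary (coordinates illustrative)
J1, 40532110.25, 3412870.40
J2, 40532245.90, 3412866.12
J3, 40532240.10, 3412750.00
"""
pts = parse_boundary_points(SAMPLE)
```

Once every layout is normalized to these tuples, a single downstream step can build the geometry, regardless of which year or agency produced the file.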
Preferably, for the map symbolization module:
data processing unit
1) Data inspection repair
In general, data converted from other formats, or digitized data, may contain topology errors or other problems, so the data must first be checked and repaired, or other data processing work will be affected. Common data-checking tools in ArcGIS include Check Geometry, Repair Geometry and the Topology tools.
2) Duplicate data processing
When a dataset contains a large number of points, several points may coincide, and searching for them manually one by one is extremely laborious. ArcGIS provides a function for finding elements within a given distance. Usage: open Analysis Tools > Proximity > Point Distance under ArcToolbox; in the Point Distance dialog, set the distance to 0 and select the same feature class for both the input features and the near features to find all coincident points.
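Outside of ArcGIS, the distance-0 case can be sketched by bucketing points on their exact coordinates: any bucket with more than one member is a group of coincident points. The point identifiers below are illustrative.

```python
from collections import defaultdict

def coincident_groups(points):
    """Group point IDs by exact (x, y); return groups of size > 1.
    Mirrors Point Distance with the distance set to 0."""
    buckets = defaultdict(list)
    for pid, x, y in points:
        buckets[(x, y)].append(pid)
    return [ids for ids in buckets.values() if len(ids) > 1]

dups = coincident_groups([
    ("p1", 10.0, 20.0),
    ("p2", 10.0, 20.0),   # coincides with p1
    ("p3", 11.5, 20.0),
])
```

This is linear in the number of points, so it stays practical even when manual one-by-one inspection would not. A small nonzero tolerance would instead require rounding the coordinates to a grid or a spatial index.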
3) Logical relationship checking
Some data may contain logical errors caused by problems in the data itself. For example, some POIs (points of interest) may fall inside a water-system polygon or onto a road surface, and finding these manually is laborious and inefficient. In ArcMap, points that violate such logic can be found with the Selection > Select By Location command.
4) Data clipping
This tool cuts out part of a feature class, using one or more other feature classes as a "cookie cutter".
1) Polygonal aggregation
The polygon aggregation tool Aggregate Polygons operates mainly on a given aggregation distance and area; the output aggregated polygons carry no attribute information, a one-to-many relationship table is output, and aggregation cannot be performed within a single polygon.
2) Map projection
Projection information is stored in the dataset's PRJ file; if a map has no projection information, the projection is listed as "unknown". The map projection is defined with the Define Projection tool.
3) Other data processing
Other common data processing includes: centerline extraction (Collapse Dual Lines To Centerline), data edge matching (Spatial Adjustment > Edge Match), line simplification (Simplify Line), building simplification (Simplify Building), polygon simplification (Simplify Polygon) and line smoothing (Smooth Line). Depending on the actual situation, deleting sensitive data, unifying field names and the like may also be involved. In short, the processing scheme must be decided for the specific data situation, and the mapping effect after processing must also be considered. After data processing is completed, the electronic map matching flow begins.
4) Layer hierarchy
Map levels are set according to specific requirements, and the map display scale at each level is predefined.
5) Layer grouping
Data are managed by creating layer groups in ArcGIS and classified according to the displayed scale range; the display scale is set once per layer group, so the layers need not be configured one by one. At the same time, because each layer group corresponds to one display scale, different scales can be freely combined when cutting tiles, or cut individually.
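The one-setting-per-group idea above can be sketched as a configuration plus a lookup: each group carries one scale range, and every layer inside it inherits that range. Group names, layers and scale bounds below are invented for illustration.

```python
# Each layer group carries exactly one display-scale range;
# all layers inside the group inherit it.
LAYER_GROUPS = {
    "overview": {"min_scale": 1_000_000, "max_scale": 250_000,
                 "layers": ["province", "city"]},
    "detail":   {"min_scale": 10_000,    "max_scale": 500,
                 "layers": ["parcel", "building", "poi"]},
}

def visible_layers(scale: int):
    """Layers shown at a map scale of 1:<scale>.
    min_scale is the smallest (most zoomed-out) scale, so the
    visibility test is max_scale <= scale <= min_scale."""
    shown = []
    for group in LAYER_GROUPS.values():
        if group["max_scale"] <= scale <= group["min_scale"]:
            shown += group["layers"]
    return shown
```

Adding a layer to a group then requires no per-layer scale setting, which is exactly the maintenance saving the paragraph describes.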
6) Attribute table processing
The classification and coding of geographic entity data should adopt existing national or industry standards wherever possible, combined and extended where necessary. Where no specific information classification code exists, a new field (e.g., ClassID) can be added to the attribute table as the information classification code and used as the unique matching field for cartographic symbolization.
1.1.7.2 map symbolization
1) Symbolic variable
1. Shape variable
Shape is the outline form of a symbol graphic, a visually distinguishable geometric unit, and is used to reflect qualitative differences between mapped elements. For linear or areal elements the shape depends on the spatial distribution of the geographic element itself, so shape design in map design is mainly the design of point symbols.
2. Dimensional variable
The size variable refers to measurable changes in the length, width, height, area and so on of the constituent symbols. It is commonly used to express differences in the quantitative characteristics of geographic elements, using arbitrary, relative or absolute ratios.
3. Variable of direction
The direction variable applies to elongated or linear symbols and often expresses the spatial orientation of the mapped object. It operates at two levels: the orientation of the symbol graphic itself, and the orientation of the texture within a pattern fill.
4. Color variation
Color is often used in conjunction with shape and is the most active visual variable. Colors include chromatic and achromatic colors; chromatic colors have three components (hue, brightness and chroma), while achromatic colors have only brightness. In map design the color variable enhances aesthetics, improves clarity and increases the map's information capacity, and can be used for qualitative or quantitative characterization.
5. Texture variable
A texture is a pattern composed of dots, lines and dot-line combinations of a certain shape and size, arranged or filled in a certain manner. Hatching symbols of different density and thickness are used to represent the rank or quantity characteristics of the mapped object.
2) Surface layer configuration
The symbolization of polygon layers mainly configures the polygon fill colors and dynamic area annotations. Before colors are assigned, a specific fill-color list is collected and designed. The dynamic annotations are set as follows:
Allow incomplete annotation: when panning or roaming the map view, dynamic annotations may be drawn incompletely at the view edge, which avoids annotation position shifts caused by label avoidance. If this option is not selected, the tile-cutting effect will be affected.
Annotation display scale: sets the scale range within which a polygon's dynamic annotation is displayed.
3) Line pattern layer configuration
The line layer configuration includes the line type, the symbol scale, and the minimum display scale at which symbols are drawn at a fixed size. When a single line layer needs several legend symbols, the features are first classified by field attribute and the parameters are then set for each class.
4) Dot pattern layer configuration
The matching scheme for a point layer is determined by the number of points. If the points and types are few, the same method as for line layers is used: a thematic map displayed hierarchically by type. If the point data volume is huge and the types are many, the data must be thinned, for example by extracting and generalizing points by scale, or by configuring the annotation display as dynamic annotations.
5) Map annotation
Map annotation refers to the annotations and written descriptions on a map and is itself a category of map symbol. Annotations may indicate mapped objects (name, location and type), convey object attributes (descriptive text of various kinds, numeric annotations) and serve a descriptive function, helping the reader interpret the map symbols correctly.
1. Type of note
The map notes include name notes, description notes, number notes and map-sheet notes.
a) Font: and identifying the category and form of the drawing object, such as Song dynasty body for rural residential areas and left-oblique or right-oblique Song dynasty body for water system names.
b) Font size: reflecting the level and importance of the annotated object. The font with high important transaction level is large, and the other way round is small.
c) Character color: the drawing object is represented by the property of a category, such as blue for marking a water system and black for marking a residential area.
d) Word pitch: the space between the marked characters is convenient for determining the distribution range of the drawing objects, and the distance between the marked characters of each single object is equal. The distance between the dot-shaped objects and the line-shaped objects is small, the distance between the planar objects and the line-shaped objects is large, and the distance between the planar objects and the line-shaped objects is determined according to the size of the area to be marked.
2. Positioning arrangement
Map annotations are configured so that the annotated object is clearly identified; labels are placed in blank space as far as possible, must not overprint or obscure other lines or annotations, and should reflect the spatial distribution characteristics of the annotated feature.
Labels of point symbols are placed close to the corresponding symbol without letter spacing, usually on a horizontal line to its right; they may also be arranged along the parallels or parallel to the upper and lower neatlines.
Labels of linear symbols or strip-distributed elements should be placed along one side of the line, usually arranged horizontally, vertically, in echelon or in staggered columns, with the label axis parallel to the symbol or following the symbol's axis.
Labels of areal symbols mostly use echelon or staggered arrangements, placed inside the symbol's area along its central main axis. On the same map, labels of the same feature type must be configured consistently.
Preferably, the principle of the map slicing module is to use a caching mechanism to increase map access speed: for each scale, the map is divided in advance into many small images stored on the server; when a client accesses the map, the required tiles are fetched directly and stitched into the map, instead of dynamically rendering the whole map as one image and sending it to the client, which greatly improves map access speed.
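The stitching step relies on being able to compute which tile covers a given point at a given scale level. As an illustration of the arithmetic (assuming the common Web Mercator "slippy map" scheme with 256-pixel tiles; other tiling schemes differ in origin and level definitions):

```python
import math

def tile_index(lon: float, lat: float, zoom: int):
    """Column and row of the tile containing a WGS84 point at a zoom
    level, under the standard Web Mercator tiling: 2**zoom tiles per
    axis, origin at the top-left (northwest) corner."""
    n = 2 ** zoom
    col = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    row = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return col, row

# The client asks for the tile under the Shanghai city centre at zoom 10,
# then fetches its neighbours to fill the viewport.
col, row = tile_index(121.47, 31.23, 10)
```

Because tiles are addressed purely by `(zoom, col, row)`, the server can pre-render and cache them all, and the client's work reduces to index arithmetic plus image stitching, which is the speed gain the paragraph describes.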
Preferably, for the data checking and validation module:
data checking unit
1) Examining content
Each step of data sorting and database building is checked; the checking content includes checks of basic-class data and checks of management-class data.
2) Inspection method
Data quality inspection combines automatic checking with human-computer interaction checking, full inspection with sampling inspection, and system verification checking.
3) Automated inspection and human-computer interaction inspection
Spatial data and structured data are checked automatically to verify that nothing was lost during conversion, that the table structure meets the relevant standards, and that internal associations are consistent with the original data.
For data association relations, human-computer interaction checks verify whether association errors exist and whether the linkage matching rate meets the standard, and the business department confirms whether all associable data have been linked.
4) The whole inspection and the sampling inspection are combined
Quality problems during database construction can arise from two sources: software tools and manual operation errors. Problems potentially introduced by software tools, such as batch operations that can easily cause data loss, require completeness checks carried out on a sampling basis.
5) System verification check
Data is the lifeblood of the system, and only through the system does it finally deliver its intended effect. After the data are committed to the warehouse, their correctness and validity can also be judged, in an auxiliary way, from how they behave in the running system.
6) Inspection result processing
After the result inspection is finished, the inspector fills in an inspection report evaluating the quality of the sample and recording the problems found. The inspection report is the basis on which the quality inspector confirms whether the result is qualified and decides the next step. The inspection results are handled in three forms: confirmation and warehousing, return for rework, and concession acceptance.
7) Confirm to put in storage
And for the data qualified in the evaluation of the inspection report, the quality inspector evaluates the inspection report and receives and stores the results.
8) Rework modification
For the data problems described in the inspection report, the quality inspector, the checker and the specific data implementer confirm them together, and the quality inspector returns the data and the inspection report to the data implementer for modification. The resubmitted modification results can be accepted and warehoused after confirmation.
9) Yielding reception
After the quality inspector, the checker and the specific data implementer jointly confirm that a data problem genuinely cannot be corrected, the quality inspector may accept the data as a concession and record the problem in the list of outstanding data issues.
10) Data inspection standard
Data quality is controlled mainly according to the principles of data integration and the specific requirements of database construction. Because data integration follows the concrete design of the database, with processing and conversion performed according to the designed data content and organizational structure, the data must be checked before warehousing against the concrete data element layering, naming conventions, attribute structures, prescribed terms, spatial extent and so on, ensuring that the warehoused data meet the database-construction requirements. The specific quality checks cover data completeness, logical consistency, spatial positioning accuracy, correctness, temporal currency and similar aspects.
Preferably, for the data warehousing and publishing module:
data warehousing unit
In order to ensure that the warehousing work does not influence the system operation and ensure the accuracy and the integrity of the content of the data warehousing process, the data warehousing work is divided into two links: firstly, data are simulated and put in storage; and secondly, formally warehousing the data. In order to ensure the smooth development of projects, the accuracy of warehousing data and the like, the warehousing work needs to be strictly arranged.
Data distribution unit
Slicing and indexing various basic and professional databases so as to manage and apply the basic and professional databases in daily life; and the management data is released without slicing, and the management data is released after the indexes are directly established. And supporting data to be issued to different ArcGIS Server servers, and performing unified configuration management on various services through data resource directory management.
1) Layer data publishing service
1. Dynamic publishing
Publish the map service, set the attributes of the published layers, and set the attribute information of the service.
2. Slice publishing
Slice publishing means that the layer data are divided into slices by scale and displayed at the slice scales when the layer is browsed. To publish the map service, select the layer to publish, the target server, the service name and the slice service type, and configure the slicing.
3. Existing slice publishing
Existing-slice publishing means that layer data which have already been sliced need not be sliced again; the existing slice service can be reused, provided the service name is consistent with the slice folder. To publish the map service, select the server, the service name and the existing-slice service type, and set the path for the published service.
Preferably, for the data center management module:
metadata management
Supporting the warehousing of metadata; maintaining associations of metadata with data; metadata browsing, querying and statistics summarizing functions are supported; establishing a metadata directory to support standard metadata directory service; the output and printing of the metadata can be realized.
Data import/export
Supports warehousing, migration, export and similar functions for multi-source database data; supports data checking; supports import/export among heterogeneous data formats in a unified environment, including VCT, DWG, SHP and ArcGIS formats. Supports projection parameter setting and projection transformation of vector and raster data; supports dynamic projection of spatial data; and supports coordinate zone-number removal and addition, whole-map translation, affine transformation, linear transformation and polynomial transformation.
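Of the coordinate adjustments listed above, a 2D affine transformation covers several at once: x' = a·x + b·y + c, y' = d·x + e·y + f, with whole-map translation as the special case a = e = 1, b = d = 0. A minimal sketch (parameter values illustrative):

```python
def affine(points, a, b, c, d, e, f):
    """Apply the 2D affine transform
       x' = a*x + b*y + c,  y' = d*x + e*y + f
    to a list of (x, y) coordinate pairs."""
    return [(a * x + b * y + c, d * x + e * y + f) for x, y in points]

# Whole-map translation: shift a parcel 100 m east and 50 m north
# (a = e = 1 and b = d = 0 reduce the affine transform to a shift).
shifted = affine([(500.0, 800.0), (520.0, 830.0)], 1, 0, 100, 0, 1, 50)
```

Rotation, scaling and shear are obtained from the same six parameters, which is why a single affine routine can back several of the listed adjustment functions.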
Data quality inspection
According to the characteristics and rules of the various data, inspection work is carried out on all data before warehousing and updating, including completeness checks of data packages, consistency checks of business-data logic, topological relation checks and the like, and security audits are performed on the inspection results.
The data checking is mainly embodied in two aspects of checking the graph topology and the attribute rule.
Graphic topological rule checking corrects erroneous data manually or in batches according to user-defined per-layer error-checking and handling rules, and performs multi-layer overlay checks according to the precision-level ordering between layers, so that data quality is guaranteed.
Attribute rule checking sets custom rules according to the relevant standards and technical regulations of the different databases, such as the completeness and accuracy of attributes, and completes the attribute checks accordingly.
In addition, logical relations such as validity, reasonableness and consistency of database structures and attribute contents can be checked.
Data quality inspection adopts an inspection model based on knowledge rules and provides quality inspection and evaluation analysis for the "one map" housing-and-land core database, comprising a rule customization module, a data inspection module and a data quality evaluation analysis module.
Data conversion
And providing data conversion function service of various data in different formats. Including the interconversion of DWG, DXF, SHP, ArcGIS formats.
Data set management maintenance
According to the different data types and business application themes, a physical model of the related spatial and attribute data, i.e., a data set (including entity naming, attribute definitions, keyword definitions and association-relation definitions), is established; that is, a customized service manages the object data repositories and other related data sets. This includes creating, matching, modifying, deleting and querying data sets, entity tables and views. For example: a construction-land data set oriented to construction-land approval business is established, and "overlay views" are made of basic geography, current land use, land-use planning, parcel ownership maps, remote sensing images, agricultural-conversion and land-supply layers and so on, realizing an object-oriented, application-oriented spatial data set.
And the structure maintenance of the layer elements and the attribute tables of different data sets is realized, and the operations of creating, matching, modifying, deleting, inquiring and the like of the field structure are included.
Data editing
Provides various data editing and processing tools: supports editing of spatial data, including adding, deleting and editing point, line and polygon objects, and supports automatic and manual edge matching of adjacent map sheets; supports topology generation for vector data, correction of topology errors, and export and deletion of vector data; vector data editing supports rule-based batch processing. Supports adding and deleting attribute fields; deleting, adding and modifying data records; exporting attribute data; and rule-based batch editing of attribute data.
Map publishing
And slicing and indexing various basic and professional databases so as to manage and apply the basic and professional databases in daily life. And the management data is released without slicing, and the management data is released after the indexes are directly established. And supporting data to be issued to different ArcGIS Server servers, and performing unified configuration management on various services through data resource directory management.
Data distribution
Includes identity registration, querying, retrieval, browsing, application, auditing, downloading (or offline distribution), user access, logging and so on. Data distribution is realized through a strict, standard management process (user registration, login, metadata query, data application, application review, review-result notification, offline distribution, etc.). Data can be conveniently published directly to the corresponding mobile-terminal devices, and an offline distribution mode is supported in consideration of data security.
Data backup
1) Data backup
Supports database backup; data are backed up both periodically and in real time, making the data safer and preventing data loss and accidental damage.
2) Data recovery
Disaster recovery measures hold a considerable position in overall database security, because they determine whether a system can recover quickly after a disaster. Recovery operations fall into two categories: full-disk recovery and individual-file recovery.
Data security management
The system has the functions of user and authority management, system monitoring and the like. The operation safety and the access safety of the system are managed, the normal, safe and reliable operation of the application system is guaranteed, and the management, the distribution and the control of the operation authority and the access are realized.
System maintenance
The system has a data dictionary management function, and comprises data dictionary management of different data such as basic geographic data, land utilization current situation, land planning, town cadastre, construction land and the like, and data dictionary management of various planning resource index data.
Supports flexible configuration of the connections and connection pool between the system and the database; provides database operation logs and log-file browsing, so that the usage of data in the database can be tracked.
The invention also discloses an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the digital twin full-life cycle management platform when executing the program.
The invention also discloses a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the digital twin full lifecycle management platform.
It should be noted that the technical features of the user layer, the server, and the like related to the present patent application should be regarded as the prior art, and the specific structure, the operation principle, the control mode and the spatial arrangement mode of the technical features may be selected conventionally in the field, and should not be regarded as the invention point of the present patent, and the present patent is not further specifically described in detail.
It will be apparent to those skilled in the art that modifications and equivalents may be made in the embodiments and/or portions thereof without departing from the spirit and scope of the present invention.

Claims (7)

1. A digital twin full lifecycle management platform for full lifecycle management, comprising a base layer, a data layer, a support platform layer, a business application layer and a user layer, wherein:
the basic layer is used for providing a hardware environment and a network environment for the whole platform to run, and comprises a server, a storage system, a backup disaster recovery module, an operating system and network equipment;
the data layer is used for monitoring and detecting various types of data and comprises a business database and a model database;
the supporting platform layer is used for supporting the development and operation of the whole platform;
the service application layer is used for integrating all service systems and enabling them to communicate with each other, thereby supporting seamless switching among a plurality of service systems and supporting common application and analysis services within the service systems;
and the user layer is used for each user to log in to the platform via single sign-on, so that operations are performed within the authority scope of the current user.
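Purely as an illustration of how claim 1's layers might compose, the sketch below wires the five layers into one object and shows the user layer enforcing the current user's authority scope over the business systems of the application layer. All names and the dictionary-based layer representation are assumptions of this sketch, not the claimed implementation.

```python
class Platform:
    """Sketch of the five-layer composition from claim 1: base, data,
    support, business application, and user layers."""

    def __init__(self):
        self.base_layer = {"servers": [], "storage": None, "network": None}
        self.data_layer = {"business_db": {}, "model_db": {}}
        self.support_layer = {"services": {}}
        self.application_layer = {}   # business system -> supported operations
        self.user_layer = {}          # user -> permitted operations

    def register_system(self, name, operations):
        self.application_layer[name] = set(operations)

    def login(self, user, permissions):
        """Single sign-on: one login scopes the user to its authority range."""
        self.user_layer[user] = set(permissions)

    def invoke(self, user, system, operation):
        if operation not in self.application_layer.get(system, set()):
            raise KeyError(f"{system} does not support {operation}")
        if operation not in self.user_layer.get(user, set()):
            raise PermissionError(f"{user} may not perform {operation}")
        return f"{operation} executed on {system}"
```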
2. The digital twin full lifecycle management platform of claim 1, wherein the construction of the database in the data tier comprises:
the data resource system construction module is used for combing data resources, analyzing the levels, categories and relations among various types of data, uniformly planning the data resources, formulating a uniform data resource coding and classifying system and establishing a space basic information data directory to form a space data resource system;
the data resource directory system construction module is used for uniformly planning data resources on the basis of collecting and analyzing data information of each node, and respectively establishing a main node data resource system and a sub-node data resource system according to uniform data resource coding and classification;
the unified data model design module organizes and describes information with the space object as its core according to a unified definition, and codes space objects according to a unified rule to form an object entity model and a relation model, which describe the spatial characteristics, service characteristics, logical keys, and service behavior of the objects;
the data logic model module comprises an entity object model and a data relation model;
the data analysis module is used for analyzing the various existing stock data against the database-building target and construction requirements, assessing data quality and defects, and analyzing the differences between the current data and the target data in structure, content, and form, so as to combine the available resources, confirm the work content, approximate workload, and basic working method needed to achieve the database-building target, and compile a data analysis report;
the map vectorization module is used for vectorizing a part of image results to build a database by combining data application requirements, so that data query, analysis and statistics are supported;
the data standardization conversion module is used for converting the attribute data and graphic data of the existing data results, so that multi-source data results form basic data meeting the requirements of the target structure;
the map symbolization module is used for setting element display symbols under different display scales;
the data detection and confirmation module is used for performing quality inspection on the processed planning resource data before warehousing: qualified data is warehoused directly, while data that fails inspection is corrected for the specific problems found and warehoused only after passing re-inspection, thereby avoiding data quality problems at the source and ensuring the accuracy and correctness of all warehoused data;
the data warehousing and publishing module is used for dividing warehousing work into trial warehousing and formal warehousing, and for slicing the basic and professional databases and building indexes, so that basic and professional data can be managed and applied in daily operations;
the data center management module is used for managing data in a centralized manner and comprises management functions of metadata management, data quality inspection, data conversion, data set management and maintenance, data editing, thematic mapping, map publishing, data distribution, data backup, data safety and system maintenance.
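The quality-gate described in claim 2's data detection and confirmation module — warehouse qualified records directly, fix failing ones, and admit them only after re-inspection — can be sketched as below. The function signature, the `check`/`fix` callables, and the round limit are assumptions of this illustration.

```python
def warehouse_data(records, check, fix, max_rounds=3):
    """Quality-gate pipeline: records passing `check` go straight to
    the warehouse; failing records are passed through `fix` and must
    pass `check` on a later round before being warehoused."""
    warehoused, pending = [], list(records)
    for _ in range(max_rounds):
        still_failing = []
        for rec in pending:
            if check(rec):
                warehoused.append(rec)
            else:
                still_failing.append(fix(rec))
        if not still_failing:
            break
        pending = still_failing
    else:
        raise ValueError(f"records still failing after {max_rounds} rounds")
    return warehoused
```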
3. The digital twin full lifecycle management platform of claim 2, wherein the data resource directory architecture module comprises a physical database directory unit, a data resource general directory unit, a data resource public service directory unit and a data resource sharing service directory unit, wherein:
the physical database directory unit is used for providing a unified physical database directory, can uniformly manage all physical data and files related to the platform, forms a data resource key information list by recording physical information metadata, and effectively integrates and standardizes data information;
the data resource general catalog unit is used for establishing a general catalog of all online data resources related to space basic information management and service, and compiling related metadata information and optimizing a data storage structure;
the data resource public service directory unit comprises a directory of related thematic information opened to the public so as to check various public information according to the public service directory;
the resource data sharing service directory unit is used for compiling and recording sharing data information and developing a data resource sharing service directory, comprises the coverage range, the identification and the subject key information of sharing data, and compiles and records related metadata information so as to meet the sharing requirements of data resources and realize the sharing and exchange of the data resources in each department.
4. The digital twin full lifecycle management platform of claim 3, wherein the data analysis module comprises a data analysis content unit, a data analysis method unit and a data analysis result unit, wherein:
the data analysis content unit is used for overall and detailed analysis of individual thematic data and for correlation analysis of associated data;
the data analysis method unit is used for applying different analysis methods according to data type, the methods comprising script analysis, system/tool-assisted analysis, manual query/interview, and a data analysis workflow;
the data analysis result unit is used for compiling the data analysis process and results into a data analysis report that presents a data overview, the problems found, and suggestions for data utilization.
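A minimal script-style analysis in the spirit of claim 4 might profile stock records and emit the three report sections the claim names (overview, problems, suggestions). The field-completeness check and all names below are assumptions of this sketch.

```python
def analyze(records, required_fields):
    """Profile stock data and return a data analysis report with the
    sections named in claim 4: overview, problems, and suggestions."""
    problems = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            problems.append((i, missing))   # record index + missing fields
    return {
        "overview": {"records": len(records), "fields": required_fields},
        "problems": problems,
        "suggestions": (["fill missing required fields before loading"]
                        if problems
                        else ["data ready for standardised conversion"]),
    }
```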
5. The digital twin full-lifecycle management platform according to claim 4, wherein the map vectorization module comprises a table structure planning unit, a database and layer creation unit, a map registration unit, a vectorization acquisition unit, an attribute assignment unit, and a result checking unit, wherein:
the table structure planning unit is used for analyzing and planning the content to be digitized in combination with data condition analysis, preliminarily analyzing the content of the corresponding theme, element type and key field of each layer, and further planning and designing the corresponding table structure in combination with service requirements to form a database standard;
the database and layer creation unit is used for creating the corresponding element layers, defining the database and element sets, generally setting the projection coordinate parameters at the element-set level, and defining thematic layers one by one according to professional layering;
the map registration unit is used for registering the map after scanning: the scanned raster image is checked to ensure that vectorization can proceed smoothly, the scanned image is loaded and the map registered, coordinates of known special points are selected as control points during registration, and after the control points are selected the registered image is resampled according to a set transformation formula;
the vectorization acquisition unit is used for loading the registered images and the pre-created element sets into the current desktop environment, vectorizing the new elements one by one at a suitable scale by starting an editor and setting the layer to be digitized, and assigning attribute items;
the attribute assignment unit is used for entering the key attributes of the map into the database synchronously during map digitization;
the result checking unit is used for checking the digitalized result, the layer storage, the attribute information and various topological relations so as to ensure the accuracy of the result.
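The "set transformation formula" used for resampling after control points are selected (claim 5's map registration unit) is commonly an affine transform fitted by least squares; the patent does not name the formula, so the choice of an affine model here is an assumption, as are all function names.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit, by least squares, the affine transform mapping scanned-image
    control points (src) to their known map coordinates (dst).
    Requires at least three non-collinear control points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs   # 3x2 matrix of affine coefficients

def apply_affine(coeffs, pt):
    """Transform one image point into map coordinates."""
    x, y = pt
    return tuple(np.array([x, y, 1.0]) @ coeffs)
```

With the transform fitted, each pixel of the scanned image can be mapped into the target coordinate system and the image resampled on that grid.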
6. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements a digital twin full lifecycle management platform as claimed in any of claims 1 to 5 when executing the program.
7. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements a digital twin full lifecycle management platform as claimed in any of claims 1 to 5.
CN202210658459.5A 2022-06-11 2022-06-11 Digital twin full life cycle management platform Pending CN114968984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210658459.5A CN114968984A (en) 2022-06-11 2022-06-11 Digital twin full life cycle management platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210658459.5A CN114968984A (en) 2022-06-11 2022-06-11 Digital twin full life cycle management platform

Publications (1)

Publication Number Publication Date
CN114968984A true CN114968984A (en) 2022-08-30

Family

ID=82961614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210658459.5A Pending CN114968984A (en) 2022-06-11 2022-06-11 Digital twin full life cycle management platform

Country Status (1)

Country Link
CN (1) CN114968984A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115687480A (en) * 2022-10-31 2023-02-03 朱俊丰 Prefabricated smart park one-map system
CN115718744A (en) * 2022-11-28 2023-02-28 北京中航路通科技有限公司 Data quality measurement method based on big data
CN116596461A (en) * 2023-04-21 2023-08-15 华建数创(上海)科技有限公司 Digital system and method for three-dimensional development of rail transit station
CN116596461B (en) * 2023-04-21 2024-03-26 华建数创(上海)科技有限公司 Digital system and method for three-dimensional development of rail transit station
KR102642572B1 (en) * 2022-11-18 2024-02-29 이에이트 주식회사 Modeling method and system for managing full cycle of object in linked data structure of digital twin data platform
KR102642571B1 (en) * 2022-11-18 2024-02-29 이에이트 주식회사 Database structure design method and system for storing linked data of digital twin data platform


Similar Documents

Publication Publication Date Title
CN114968984A (en) Digital twin full life cycle management platform
CN112270027B (en) Paperless intelligent interactive examination method for city design based on entity model
CN111680025B (en) Method and system for intelligently assimilating space-time information of multi-source heterogeneous data oriented to natural resources
CN113434623B (en) Fusion method based on multi-source heterogeneous space planning data
US20070162482A1 (en) Method and system of using artifacts to identify elements of a component business model
CN106547853A (en) Forestry big data building method based on a figure
CN112199433A (en) Data management system for city-level data middling station
CN109977162A (en) A kind of urban and rural planning data transfer device, system and computer readable storage medium
CN110119395B (en) Method for realizing association processing of data standard and data quality based on metadata in big data management
CN108564283A (en) Construction quality rating database construction method based on BIM
Xu et al. Developing an IFC-based database for construction quality evaluation
CN105260300B (en) Service test method based on accounting standard universal classification standard application platform
CN112559351B (en) CFD software verification and confirmation database platform
CN113656493A (en) Method and system for constructing digital twin city multi-bank fusion
CN110990620A (en) Intelligent transformer substation drawing and document data management method based on intelligent technology application
CN111708774A (en) Industry analytic system based on big data
CN114880405A (en) Data lake-based data processing method and system
CN114547077A (en) Intelligent processing system and method for basic government affair form data
CN110941629A (en) Metadata processing method, device, equipment and computer readable storage medium
JP2007133624A (en) Information management method and device using connection relation information
CN113868498A (en) Data storage method, electronic device, device and readable storage medium
CN116991931A (en) Metadata management method and system
CN116578614A (en) Data management method, system, medium and equipment for pipeline equipment
CN115795075A (en) Universal model construction method for remote sensing image product
CN115563341A (en) Spatial video field for electric power operation violation identification and intelligent data processing system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination