WO2023185315A1 - Scene data management method, apparatus, electronic device, and readable medium - Google Patents

Scene data management method, apparatus, electronic device, and readable medium

Info

Publication number
WO2023185315A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
data
event
private
description file
Prior art date
Application number
PCT/CN2023/077101
Other languages
English (en)
French (fr)
Other versions
WO2023185315A9 (zh)
Inventor
张克飞
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Publication of WO2023185315A1 publication Critical patent/WO2023185315A1/zh
Publication of WO2023185315A9 publication Critical patent/WO2023185315A9/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • G06F9/4451User profiles; Roaming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6227Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/42Syntactic analysis
    • G06F8/427Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading

Definitions

  • The present disclosure belongs to the field of computer technology, and specifically relates to a scene data management method, apparatus, electronic device and readable medium.
  • A Geographic Information System (GIS), also known as a "geoscience information system", is a specific type of spatial information system. Supported by computer hardware and software systems, it is used to collect, store, manage, compute, analyze, display and describe geographically distributed data over all or part of the Earth's surface (including the atmosphere).
  • the present disclosure aims to provide a scene data management method, device, electronic device and readable medium.
  • a first aspect of the present disclosure provides a scene data management method, which includes:
  • a second aspect of the present disclosure provides a scene data management method, which includes:
  • the scene model is loaded and displayed.
  • a third aspect of the present disclosure provides a scene data management device, which includes:
  • a drawing module configured to draw corresponding scene elements through each element drawing entrance included in the scene editing interface, which includes a plurality of element drawing entrances for drawing different types of scene elements;
  • the acquisition module is configured to obtain the user private data stored in the private storage space through the private data access entrance in the scene editing interface, and to obtain the system public data stored in the public storage space through the public data access entrance in the scene editing interface;
  • the editing module is configured to edit each drawn scene element based on the obtained user private data and system public data to obtain a scene model containing multiple scene elements;
  • a generation module configured to, in response to the received scene saving instruction, obtain element attribute information of each scene element contained in the scene model, generate a scene description file corresponding to the element attribute information of each scene element based on preset specifications, and store the scene description file in the private storage space to be provided to the scene parser for parsing and loading.
  • a fourth aspect of the present disclosure provides a scene data management device, which includes:
  • an acquisition module configured to, in response to the received scene loading request, acquire a scene description file corresponding to the scene loading request
  • a parsing module configured to parse the scene description file based on preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file;
  • a loading module is configured to load and display the scene model based on the element attribute information of each scene element.
  • a fifth aspect of the present disclosure provides an electronic device, including:
  • one or more processors;
  • a memory on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any of the methods described above;
  • One or more I/O interfaces are connected between the processor and the memory, and are configured to implement information exchange between the processor and the memory.
  • a sixth aspect of the present disclosure provides a computer-readable medium on which a computer program is stored. When the program is executed by a processor, any one of the above methods is implemented.
  • Figure 1 is a flow chart of a scene data management method provided by an embodiment of the present disclosure
  • Figure 2 is a flow chart of a scene data management method provided by yet another embodiment of the present disclosure.
  • Figure 3 is a flow chart of a scene data management method provided by another embodiment of the present disclosure.
  • FIG. 4 is a flow chart of the specific implementation of step S330;
  • Figure 5 shows a schematic structural diagram of a scene data management device provided by yet another embodiment of the present disclosure
  • Figure 6 shows a schematic structural diagram of a scene data management device provided by yet another embodiment of the present disclosure.
  • FIG. 7 is a functional block diagram of an electronic device provided by an embodiment of the present disclosure.
  • embodiments of the present disclosure provide a scene data management method that implements a flexible creation process of scene models based on a GIS system and can realize a quick save operation of scene models.
  • the scene data management method provided by the embodiment of the present disclosure can be applied to GIS systems.
  • the method includes:
  • Step S110: Draw corresponding scene elements through each element drawing entrance included in the scene editing interface; the scene editing interface contains multiple element drawing entrances for drawing different types of scene elements.
  • the scene editing interface is used to edit the created scene.
  • the scene editing interface contains multiple element drawing entrances for drawing different types of scene elements; through different element drawing entrances, scene elements of the corresponding types can be drawn.
  • the GIS scene contains various types of scene elements, and various types of scene elements together constitute a complete GIS scene.
  • scene elements include: base layers, map data, 3D models, vector graphics, etc.
  • the present disclosure does not limit the specific types and quantities of scene elements; any module or unit that can be used to constitute a GIS scene can serve as a scene element.
  • Step S120 Obtain the user's private data stored in the private storage space through the private data access entrance in the scene editing interface; obtain the system public data stored in the public storage space through the public data access entrance in the scene editing interface.
  • user private data refers to: personal data belonging to the currently logged in user.
  • the user private data of each logged-in user is used by this user, and other users usually do not have permission to access it.
  • the user's personalized data can be stored through user private data, such as facility data within the user's workplace, building data near the user's home, etc.
  • user private data is stored in private storage space and can only be accessed based on the user ID of the corresponding user, and other users have no access rights.
  • In addition to the user private data uploaded in advance by users, the GIS system also holds system public data that can be shared by all logged-in users. Unlike the permission-based access to user private data, system public data can be accessed by every user. System public data stores the regular data used in map drawing, such as public roads, public buildings and other shared content. The system public data is stored in the public storage space, and every user can access it through the public data access interface.
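  • As a hedged illustration of this separation (not part of the original disclosure; all names below are hypothetical), a minimal TypeScript sketch of user-ID-gated access to the private storage space versus open access to the public storage space might look as follows:

```typescript
// Hypothetical sketch: private storage keyed by user ID, public storage shared by all users.
interface StorageSpaces {
  privateSpaces: Map<string, Record<string, unknown>>; // userId -> that user's private data
  publicSpace: Record<string, unknown>;                // system public data, readable by everyone
}

// Private data access: only the owning user's ID unlocks the corresponding private space.
function getPrivateData(spaces: StorageSpaces, requestUserId: string, ownerUserId: string) {
  if (requestUserId !== ownerUserId) {
    throw new Error("Access denied: user private data is isolated per user");
  }
  return spaces.privateSpaces.get(ownerUserId) ?? {};
}

// Public data access: no permission check, any logged-in user may read it.
function getPublicData(spaces: StorageSpaces) {
  return spaces.publicSpace;
}
```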
  • Step S130 Based on the obtained user private data and system public data, edit each drawn scene element to obtain a scene model containing multiple scene elements.
  • Each scene element contains a variety of information. Taking a 3D-model-type scene element as an example, the corresponding model resources need to be loaded in order to accurately describe each part of the 3D model. Different model resources may be stored in the private storage space and in the public storage space respectively; accordingly, each drawn scene element is edited based on the obtained user private data and system public data to obtain a scene model composed of these scene elements.
  • Step S140: In response to the received scene saving instruction, obtain the element attribute information of each scene element contained in the scene model, generate a scene description file corresponding to the element attribute information of each scene element based on the preset specifications, and store the scene description file in the private storage space to be provided to the scene parser for parsing and loading.
  • the scene saving instructions can be of various types such as cloud saving instructions or local saving instructions.
  • The preset specifications are used to define the data format and storage method of scene description files.
  • a preset specification for generating a scene description file is predefined, through which various types of scene content can be uniformly stored.
  • the scene description file stores element attribute information of each scene element contained in the scene model.
  • the scene description file can be parsed and loaded through the scene parser, thereby quickly restoring the user-configured scene model.
  • this method stores user private data and system public data separately, making it easier to set different data access permissions for different users, thereby achieving the purpose of isolating and protecting user private data.
  • When loading scene data, only the user private data related to the scene and the system public data within the viewport are loaded, which improves data reuse, reduces the amount of data loading, and improves the scene rendering speed.
  • the scene model can be stored as a standardized scene description file, so that the corresponding scene can be quickly parsed and restored through the scene parser to realize scene management operations, thereby realizing scene reuse and reducing scene management costs.
  • embodiments of the present disclosure provide a scene data management method that implements the parsing and loading process of scene description files based on a GIS system or other business systems, and can realize rapid loading of scene models.
  • the scene data management method provided by the embodiment of the present disclosure can be applied to GIS systems or other business systems.
  • the method includes:
  • Step S210 In response to the received scene loading request, obtain the scene description file corresponding to the scene loading request.
  • the scene loading request can be triggered through the scene loading entrance in the GIS system, or it can also be triggered through the scene loading entrance in the business system.
  • the business system can communicate with the GIS system to implement specified business functions with the help of map data.
  • the scene description file may be generated in the manner shown in the embodiment shown in FIG. 1 .
  • the scene description file can be either a locally stored file or a file stored in the cloud. Specifically, it can be obtained through the scene identifier included in the scene loading request.
  • Step S220 Parse the scene description file based on the preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file.
  • the preset specification is used to define the data format and storage method of the scene description file.
  • a preset specification for generating a scene description file is predefined, through which element attribute information of various types of scene elements can be uniformly stored.
  • the scene description file can be parsed and loaded through the scene parser, thereby quickly restoring the user-configured scene model.
  • the scene parser can be set up in a GIS system or business system to realize the parsing and loading functions of scene description files.
  • Step S230 Load and display the scene model based on the element attribute information of each scene element.
  • the loaded scene model is displayed in the system interface.
  • this method, by setting the preset specifications in advance, can quickly parse the scene description file through the scene parser and thereby restore the corresponding scene model, facilitating modification, editing and other management operations on the scene model, realizing scene reuse and reducing scene management costs.
  • embodiments of the present disclosure provide a scene data management method that realizes the generation, parsing and loading of the scene description file, so that the scene model can be loaded quickly.
  • the scene data management method provided by the embodiment of the present disclosure can be applied to GIS systems or other business systems.
  • the embodiment shown in FIG. 3 is intended to provide a more detailed description of the embodiments shown in FIGS. 1 and 2 .
  • the method includes:
  • Step S310 Receive and store user private data uploaded to the GIS system.
  • the geographical information data is managed hierarchically.
  • the so-called hierarchical management refers to storing user private data and system public data in different data storage spaces to achieve management of different access rights. For example, store user private data in private storage space, and store system public data in system public space.
  • the private storage space further includes: multiple user private spaces respectively corresponding to different users, wherein the user private spaces of different users are isolated from each other. The user's private space can only be accessed through the corresponding user ID, and other users have no access rights.
  • the system interface of the GIS system contains a private data upload entrance, through which the user's private data can be uploaded to the GIS system.
  • the user's private data can also be called material data, which can be stored in the file corresponding to the user account.
  • For example, in response to a received private data upload request, the user ID and user private data included in the private data upload request are obtained, and the user private data is stored in the user private space corresponding to that user ID in the private storage space.
  • the private data upload request is triggered through the above-mentioned private data upload entrance.
  • the user's private data is user material, which is specifically stored in the material management library and uploaded through the material management portal.
  • First, the GIS system is deployed to the user's machine, either from an installation package or from an image. Then the user is registered in the GIS system. Finally, the user's private data is uploaded to the material management library for use in subsequent steps.
  • the materials can be pictures, 3D models, json data files, etc.
  • Step S320 In response to the received scene creation request, generate a scene identifier and create scene frame data.
  • This step mainly implements the scene creation operation, so as to create a user scene corresponding to the scene identifier.
  • This scene is usually a user-defined scene, which is stored in the user's private space after creation.
  • the scene creation request can be triggered through the "scene management" entrance set in the system interface.
  • the scene creation interface is displayed in the system interface.
  • the scene creation interface includes a scene name setting entrance and a scene introduction setting entrance, which are used to set auxiliary description information of the scene, so that available scenes can be quickly filtered from multiple created scenes through the auxiliary description information.
  • the scene identifier is generated, and the scene frame data is created.
  • the scene frame data is used to implement the frame construction of the scene to show the general outline of the scene.
  • a variety of scene frame data corresponding to different types of scenes can be preset, so that scene frame data corresponding to the scene type is created according to the scene type included in the scene creation request.
  • Step S330 In response to the received scene editing request, draw a scene model corresponding to the scene editing request based on the user's private data and the system public data in the GIS system.
  • In addition to the user private data uploaded in advance by users, the GIS system also holds system public data that can be shared by all logged-in users. Unlike the permission-based access to user private data, system public data can be accessed by every user. System public data stores the regular data used in map drawing, such as public roads, public buildings and other shared content. Correspondingly, the scene model is generated based on user private data and system public data.
  • step S330 further includes various sub-steps shown in FIG. 4 . As shown in Figure 4, step S330 includes the following sub-steps:
  • Sub-step S331 Draw corresponding scene elements through each element drawing entrance included in the scene editing interface.
  • the scene editing interface includes multiple element drawing entrances for drawing different types of scene elements.
  • the above scene editing interface is displayed.
  • In response to an element drawing request triggered through an element drawing entrance included in the scene editing interface, the element type identifier included in the element drawing request is obtained, and the scene element corresponding to that element type identifier is drawn.
  • scene elements are drawn based on scene frame data.
  • the scene element includes at least one of the following nine types: base layer, map data, road network, vector graphics, 3D model, data visualization, event, special effect, and view.
  • the element drawing entrances used to draw different types of scene elements include: a first element drawing entrance (base layer drawing entrance) for drawing base-layer-type scene elements, a second element drawing entrance (map data drawing entrance) for drawing map-data-type scene elements, a third element drawing entrance (road network drawing entrance) for drawing road-network-type scene elements, a fourth element drawing entrance (vector graphics drawing entrance) for drawing vector-graphics-type scene elements, a fifth element drawing entrance (3D model drawing entrance) for drawing 3D-model-type scene elements, a sixth element drawing entrance (data visualization drawing entrance) for drawing data-visualization-type scene elements, a seventh element drawing entrance (event drawing entrance) for drawing event-type scene elements, an eighth element drawing entrance (special effects drawing entrance) for drawing special-effect-type scene elements, and a ninth element drawing entrance (view drawing entrance) for drawing view-type scene elements. In this example, nine types of scene elements are preset, and an entrance list area is provided in the scene editing interface.
  • the entrance list area displays the drawing entrances for each element.
  • the user can select the corresponding drawing entrance according to the type of scene element to be drawn.
  • the scene editing interface also includes a scene image drawing area, which is used to display the scene model composed of various scene elements drawn by the user in real time.
  • Sub-step S332 Obtain the user's private data stored in the private storage space through the private data access entrance in the scene editing interface; obtain the system public data stored in the public storage space through the public data access entrance in the scene editing interface.
  • scene elements usually also need to contain corresponding element resource data.
  • element resource data includes: file resource data, video resource data, image resource data, etc.
  • In response to a private data access request triggered through the private data access entrance in the scene editing interface, the user ID of the current user included in the private data access request is obtained, and the private data access interface is called to access the user private space corresponding to that user ID, so as to obtain the user private data.
  • In response to a public data access request triggered through the public data access entrance in the scene editing interface, the public data access interface is called to access the public storage space, so as to obtain the system public data.
  • Sub-step S333 Based on the obtained user private data and system public data, edit each drawn scene element to obtain a scene model containing multiple scene elements.
  • The scene elements contain different types of element resource data, and different element resource data may be provided by the user private data and the system public data respectively. Therefore, in this step, each drawn scene element is edited based on the obtained user private data and system public data to obtain a scene model containing multiple scene elements. It can be seen that scene elements are edited based on user private data and system public data, and the scene model is composed of multiple scene elements.
  • Sub-step S334: In response to the received event configuration request, determine the scene element corresponding to the event configuration request, and configure for that scene element a trigger event matching the event type included in the event configuration request.
  • sub-step S334 is an optional step, and in other embodiments, sub-step S334 may also be omitted.
  • a response event can be added to the scene model.
  • the response event is usually triggered for one or more scene elements in the scene model.
  • the event configuration request may be triggered through the event drawing entrance mentioned above, or may be triggered in other ways, which is not limited by the present disclosure.
  • an event interface is configured for the scene element, and the event-related data corresponding to the triggering event is bound to the event interface, so as to configure the event type of the triggering event through the event interface. It can be seen that in this embodiment, an event interface is configured for a specific scene element, and the event interface is bound to event-related data.
  • event-related data refers to: various types of data related to the triggering event, such as monitoring data after the triggering event is started, alarm data when the triggering event starts subsequent alarm processing, alarm strategies, etc.
  • the event type includes: monitoring type and prediction type.
  • the event-related data corresponding to the triggering event includes at least one of the following: monitoring object data, monitoring strategy data, and event response result data that trigger the event.
  • When configuring a trigger event for the scene element that matches the event type contained in the event configuration request, the following information of the trigger event is configured through the event interface: the monitoring object, the monitoring strategy, and the event response result.
  • This event interface is used to set the event attribute information that triggers the event, including at least one of the following: the monitoring object that triggers the event, the monitoring strategy, and the event response result.
  • the scene model in this embodiment can be applied to the application scenario of digital twin city.
  • In a digital twin city, buildings, vehicles, water systems and other real-world scenery can be presented in a three-dimensional virtual space in a virtual-reality manner. A digital twin scene, also called a virtual world, is created to render the positioning attributes, address and location information of each three-dimensional model of the city.
  • a trigger event is configured for the scene object of the 3D model included in the scene model to implement the fire monitoring function. For example, configure a trigger event for the fire protection system in the 3D model.
  • The event type of the trigger event is the monitoring type, and the event-related data includes: monitoring object data (for example, the ambient temperature value of the fire protection system, where the monitored value is obtained through a temperature monitor), monitoring strategy data (for example, the monitoring strategy is to obtain the monitoring value of the temperature monitor every 10 seconds and compare it with a preset fire temperature threshold to determine whether the monitoring value is greater than the preset fire temperature threshold), and event response result data (for example, if the monitoring value is greater than the preset fire temperature threshold, the display of the 3D model is changed, such as changing the color of the 3D model or displaying flame elements in the 3D model as a warning).
  • a trigger event is configured for the scene object of the 3D model included in the scene model to implement the machine life prediction function.
  • a trigger event is configured for a factory machine in a 3D model.
  • the event type of the trigger event is a prediction type.
  • the event-related data includes: monitoring object data (for example, the usage time of the machine), monitoring strategy data (for example, the monitoring strategy is to compare the machine's usage time with a preset life threshold to determine whether the usage time is greater than the preset life threshold), and event response result data (for example, if the absolute value of the difference between the usage time and the life threshold is less than a preset value, the color of the machine is changed to serve as a warning).
  • the pre-trained deep learning model can be further combined to obtain multiple parameters associated with the machine, so that the parameter values of multiple parameters can be combined to predict the machine life more accurately.
  • In another example, the monitoring strategy is to obtain rainfall monitoring results every 5 minutes, and the event response result is that an early warning is issued when the current rainfall monitoring result is greater than a preset value.
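  • To make the event configuration concrete, the following hedged TypeScript sketch (field names and the threshold value are illustrative assumptions, not the patent's actual format; the 10-second polling interval is taken from the fire-monitoring example above) shows how a monitoring-type trigger event bound to an event interface might be described:

```typescript
// Hypothetical sketch of a trigger event bound to a scene element's event interface.
type EventType = "monitoring" | "prediction";

interface TriggerEvent {
  eventType: EventType;
  monitoringObject: { elementId: string; metric: string };              // e.g. ambient temperature of a fire protection system
  monitoringStrategy: { intervalSeconds: number; threshold: number };   // poll every N seconds and compare with a threshold
  eventResponse: { action: string; params?: Record<string, unknown> };  // e.g. recolor the 3D model as a warning
}

// Fire-monitoring example from the text: read the temperature monitor every 10 seconds and
// change the 3D model's display when the value exceeds the preset fire temperature threshold.
const fireMonitoringEvent: TriggerEvent = {
  eventType: "monitoring",
  monitoringObject: { elementId: "fire-protection-system", metric: "ambientTemperature" },
  monitoringStrategy: { intervalSeconds: 10, threshold: 60 }, // threshold value is illustrative only
  eventResponse: { action: "changeModelColor", params: { color: "red", showFlames: true } },
};
```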
  • the scene editor provided by the GIS system is used to visually edit the scene model, including drawing features, roads, adding 3D models, visual data, special effects, configuring views, editing key frames, etc.
  • Step S340 In response to the received scene saving instruction, generate a scene description file corresponding to the scene model based on the preset specifications; wherein the scene description file is used to be provided to the scene parser for parsing and loading.
  • Upon receiving the scene saving instruction, the element attribute information of each scene element contained in the scene model is obtained, a scene description file corresponding to the element attribute information of each scene element is generated based on the preset specifications, and the scene description file is stored in the user private space to be provided to the scene parser for parsing and loading.
  • the scene saving instructions can be of various types such as cloud saving instructions or local saving instructions.
  • the element attribute information of each scene element contained in the scene model is determined respectively, and element description data for describing the attribute information of each element is generated based on the preset specifications.
  • a scene description file is generated based on the element description data of the attribute information of each element.
  • the element attribute information includes at least one of the following: element identifier, element type, element size, element orientation, element extension data, and element loading method.
  • the preset specification is used to define the format of the scene description file, and is specifically used to define the mapping relationship between the element attribute information of the scene element and the description specification in the scene description file.
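  • A hedged sketch of what the element attribute information and its mapping to element description data could look like under such a preset specification (field names and values are assumptions for illustration, not the patent's actual schema):

```typescript
// Hypothetical shape of the element attribute information listed above.
interface ElementAttributes {
  elementId: string;
  elementType: "baseLayer" | "mapData" | "roadNetwork" | "vectorGraphics" | "3dModel"
             | "dataVisualization" | "event" | "specialEffect" | "view";
  elementSize?: { width: number; height: number; depth?: number };
  elementOrientation?: { heading: number; pitch: number; roll: number };
  elementExtensionData?: Record<string, unknown>; // auxiliary description info, e.g. for the business system
  elementLoadingMethod?: "cloud" | "local";
}

// Generating element description data per the preset specification is then a deterministic
// serialization of these attributes, e.g. as a json object.
function toElementDescription(attrs: ElementAttributes): string {
  return JSON.stringify(attrs);
}
```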
  • the scene saving instruction includes: a cloud saving instruction.
  • the scene description file is associated with the scene identifier and stored in the cloud database through the cloud saving portal.
  • the GIS system analyzes the current scene, generates scene description data in the scene description file according to the scene configuration specifications (i.e., preset specifications), and saves it to the user's private space in the cloud database.
  • the scene saving instructions include: local saving instructions.
  • the scene description file is exported and stored in the user private space of the local database through the local saving entrance.
  • the GIS system when the user clicks the export button, the GIS system generates scene description data according to the scene configuration specifications and exports it as a json file.
  • Step S350 In response to the received scene loading request, obtain the scene description file corresponding to the scene loading request.
  • the scene loading request includes: a cloud loading request.
  • a scene identifier included in the scene loading request is obtained, and a scene description file corresponding to the scene identifier is obtained from the cloud database based on the scene identifier.
  • the scene loading request includes: a local loading request; correspondingly, a locally stored scene description file corresponding to the scene identifier included in the scene loading request is obtained.
  • Step S360 Parse the scene description file based on the preset specifications, obtain the scene model corresponding to the scene description file, load and display the scene model.
  • the scene description file is parsed based on the preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file; based on the element attribute information of each scene element, the scene model is loaded and displayed.
  • the element description data contained in the scene description file is obtained, and the element attribute information of each scene element corresponding to the element description data is determined based on the preset specifications; the element attribute information includes at least one of the following: element identifier, element type, element size, element orientation, element extension data and element loading method.
  • the element description data is used to store element attribute information of scene elements in accordance with the format defined by the preset specification.
  • the scene description file is stored in the user's private space and consists of element description data, and the element description data follows the above-mentioned preset specifications.
  • the element attribute information of the scene elements is described according to the preset specifications to obtain element description data, and the scene description file is constituted by the element description data.
  • Each scene element contained in the scene model is obtained, the scene element configured with a trigger event is determined as a target scene element, and a trigger event matching the event type corresponding to the target scene element is generated. Specifically, the event interface configured for the target scene element is determined, the event-related data bound to the event interface is obtained, and the trigger event is generated based on the event-related data.
  • the event type includes: monitoring type and prediction type, and the event-related data bound to the event interface includes at least one of the following: monitoring object data that triggers the event, monitoring strategy data, and event response result data.
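  • A minimal, hypothetical sketch of this parsing-and-restoring flow (the types, field names and callback API below are assumptions, not the patent's defined interfaces):

```typescript
// Hypothetical scene parser: read element description data, rebuild each scene element,
// and re-create trigger events for the elements that have one configured.
interface SceneElementDescription {
  elementId: string;
  elementType: string;
  triggerEvent?: { eventType: "monitoring" | "prediction"; [key: string]: unknown };
  [key: string]: unknown; // remaining element attribute information per the preset specification
}

function loadScene(
  fileContent: string,
  renderElement: (e: SceneElementDescription) => void,
  bindEvent: (elementId: string, ev: NonNullable<SceneElementDescription["triggerEvent"]>) => void,
): void {
  const elements: SceneElementDescription[] = JSON.parse(fileContent).elements ?? [];
  for (const element of elements) {
    renderElement(element); // load and display the scene element
    if (element.triggerEvent) {
      bindEvent(element.elementId, element.triggerEvent); // regenerate the matching trigger event
    }
  }
}
```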
  • When the scene loading request is a cloud loading request (corresponding to the case where the scene saving instruction is a cloud saving instruction), the scene parser is added to the business system using an embedded framework (such as a web iframe) when the user uses the scene, and the scene identifier is specified in the URL. The parser obtains and parses the scene description data from the GIS system based on the scene identifier, and finally displays it on the business page.
  • In addition, the embedded framework (such as a web iframe) is used to include the scene parser into the business system, and a communication mechanism such as postMessage is used to send the json file data to the parser in the iframe; the parser parses the data according to the scene description and finally displays it in the business page.
  • steps S350 to S370 are executed by the GIS system.
  • steps S350 to S370 are executed by a business system capable of communicating with the GIS system.
  • the scene description file is parsed through the scene parser set in the business system, and the scene model corresponding to the scene description file is obtained.
  • additional functions such as monitoring and querying can be performed on the loaded scene model.
  • When the element attribute information includes element extension data, the element extension data of each scene element can be queried and displayed in the business system to provide a reference for users.
  • the element extension data includes: auxiliary description information of scene elements.
  • the loading method based on cloud storage can uniformly manage the user's scene model through the cloud database, which is helpful to reduce the management cost of user-side equipment.
  • the loading method based on local storage allows the scene model to be stored directly locally without going through the cloud. For some data with higher security, it can avoid the risk of malicious interception during the transmission process.
  • the above two storage methods can be flexibly set based on the security requirements of the business scenario.
  • WebGL (Web Graphics Library) is a JavaScript API that can render high-performance interactive 3D and 2D graphics in any compatible web browser without using plug-ins. WebGL does this by introducing an API that closely conforms to OpenGL ES 2.0 and that can be used in the HTML5 <canvas> element. This allows the API to take advantage of the hardware graphics acceleration provided by the user's device, thereby increasing the speed of model drawing.
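  • For orientation only, a minimal sketch of obtaining a WebGL rendering context from an HTML5 canvas (standard WebGL API; the canvas element id is hypothetical):

```typescript
// Minimal WebGL setup: acquire a context from a <canvas> element and clear it.
const canvas = document.getElementById("scene-canvas") as HTMLCanvasElement; // hypothetical element id
const gl = canvas.getContext("webgl");
if (!gl) {
  throw new Error("WebGL is not supported by this browser");
}
gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black background
gl.clear(gl.COLOR_BUFFER_BIT);     // scene elements would then be drawn with shaders and buffers
```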
  • The above embodiments can overcome the current shortcoming that similar needs require repeated development. They provide a management system that can describe geographic information scenes and that features a clear process, saves time and labor, is convenient to maintain and quick to update, which can effectively improve work efficiency and meet the needs for display, calculation and analysis of various geographical information data.
  • the hierarchical management of geographical information data is realized using the hierarchical model of geographical information data.
  • the geographical information data in the actual scene is divided into three categories: system public data (i.e., public data), user private data (i.e., user's static data), and scene-specific data (i.e., the scene description file corresponding to the scene model).
  • System public data is provided by the GIS system; user private data is uploaded by users and can only be accessed by the users themselves; in various usage scenarios, scene-specific data can be generated based on the system public data and the user private data, that is, scene data that conforms to the geographic information scene description language specification.
  • One data layer of the data layered model is the user's private data layer.
  • This layer stores the following types of data: map management data, road network data, terrain data, urban building data, water system data, vector data, regional division data, indoor map data, POI data, material data, data interfaces, BIM, CIM, equipment data, and control interfaces.
  • Another data layer of the data layered model is the system public data layer, also called the basic data and capability layer.
  • This layer stores the following types of data and capabilities: basic map data, three-dimensional terrain data, map perspective operations, map layers, map controls, 3D tile hosting workflow, online map layers, vector data support, event engine, model support, GeoJSON support, POI points of interest, map objects, data visualization, video fusion, scene roaming, path planning, underground pipe networks, urban white models, model unitization, scene background sky box, weather special effects, spatial analysis and calculation, BIM model analysis, and virtual simulation.
  • Scene-specific data specifically includes the following types of data: map parameter configuration, overlay properties (customizable), event triggering, data binding, visual data, 3D models, interactive elements, navigation, perspective, story, chart, action, special effects, and material references.
  • This specification abstracts a geographical information scene into nine data types: base layers, user maps, road networks, vector graphics, views, 3D models, events, special effects, and data visualization, and organizes them as json objects. Each data type is a list of data elements; each data element can be mapped to the corresponding feature object on the map, and its configuration is described using a json object. If needed later, these nine data types can be expanded to support more scenarios.
  • scene data that conforms to this specification is formed, which can be read by the scene parser of the GIS system to restore the geographical scene for display.
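  • A hedged example of what scene data organized according to this specification might look like (the structure, key names and values are illustrative assumptions, shown as a TypeScript object literal rather than the patent's actual json schema):

```typescript
// Illustrative scene description data: one json object with a list per data type.
const sceneDescription = {
  baseLayers: [{ elementId: "layer-1", provider: "online-map", visible: true }],
  userMaps: [{ elementId: "map-1", source: "user-material://campus.geojson" }],
  roadNetworks: [],
  vectorGraphics: [{ elementId: "poly-1", geometry: "POLYGON(...)", style: { fill: "#3388ff" } }],
  views: [{ elementId: "view-1", center: [116.39, 39.9], zoom: 12 }],
  models3d: [{ elementId: "model-1", url: "user-material://factory.glb", loadingMethod: "cloud" }],
  events: [{ elementId: "event-1", target: "model-1", eventType: "monitoring" }],
  specialEffects: [{ elementId: "fx-1", kind: "rain" }],
  dataVisualization: [{ elementId: "viz-1", chart: "heatmap", dataBinding: "api://sensors" }],
};
```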
  • Based on the geographical information data hierarchical model and the scene description language specification, the GIS system generates scene description data according to the visual scene configured by the user, for the purpose of displaying the scene.
  • the specific process includes the following operations: hierarchical management of geographic information data, unified abstract description of GIS data used by users, and formation of geographic information scene description language specifications.
  • use the scene data identifier to obtain the scene data or use an offline file to import the scene data, thereby inputting the scene data into the GIS system to restore the scene.
  • Firstly, the hierarchical model can effectively manage geographic information data, isolate user private data, reduce the amount of scene data, and improve loading speed. Secondly, static map data is hosted in the cloud to achieve distributed, on-demand loading, which improves loading and running speed and optimizes the user experience. Thirdly, the description of GIS scenes is standardized and the resulting configuration data is managed uniformly by the GIS system, avoiding the drawback of having to develop each scene separately and improving work efficiency. Finally, a lightweight cloud scene management solution and a scene file management solution suitable for offline environments are provided, covering both online and offline scenarios and a variety of terminals, which makes the approach more adaptable to various usage scenarios. In addition, a unified specification for describing geographical information scenes is provided, making scene data standardized and easier to manage.
  • this method can realize various functions such as monitoring or prediction by configuring events.
  • This solution can load the scene model of the GIS system into the business system and, combined with the characteristics of the business system, realize monitoring or prediction functions. It can also query the auxiliary description information of elements in the business system through element extension data, so that content required by the business system can be added to the scene model of the GIS system in the form of element extension data, thereby providing information for the business system to use.
  • the business system can be the digital twin city system mentioned above, or various systems that need to cooperate with GIS models such as population migration prediction systems.
  • FIG. 5 shows a schematic structural diagram of a scene data management device provided by another embodiment of the present disclosure.
  • the management device includes:
  • the drawing module 51 is configured to draw corresponding scene elements through each element drawing entrance contained in the scene editing interface, which contains a plurality of element drawing entrances for drawing different types of scene elements;
  • the acquisition module 52 is configured to obtain the user private data stored in the private storage space through the private data access entrance in the scene editing interface, and to obtain the system public data stored in the public storage space through the public data access entrance in the scene editing interface;
  • the editing module 53 is configured to edit each drawn scene element based on the obtained user private data and system public data to obtain a scene model containing multiple scene elements;
  • the generation module 54 is configured to, in response to the received scene saving instruction, obtain the element attribute information of each scene element contained in the scene model, generate a scene description file corresponding to the element attribute information of each scene element based on preset specifications, and store the scene description file in the private storage space to be provided to the scene parser for parsing and loading.
  • the private storage space further includes: a plurality of user private spaces respectively corresponding to different users;
  • the acquisition module is specifically configured to: determine the user private space corresponding to the current user in the private storage space according to the user identification, and obtain user private data from the user private space corresponding to the current user;
  • the device also includes:
  • the upload module is configured to, in response to the received private data upload request, obtain the user ID and user private data contained in the private data upload request, and store the user private data in the user private space corresponding to that user ID in the private storage space.
  • the device also includes:
  • a configuration module configured to respond to the received event configuration request, determine a scene element corresponding to the event configuration request, and configure a triggering event for the scene element that matches the event type contained in the event configuration request.
  • the configuration module is specifically configured as:
  • An event interface is configured for the scene element, and event-related data corresponding to the triggering event is bound to the event interface, so as to configure the event type of the triggering event through the event interface.
  • the event type includes: monitoring type and prediction type; the event-related data corresponding to the triggering event then includes at least one of the following: monitoring object data of the triggering event, monitoring strategy data and event response result data.
  • the generation module is specifically configured as:
  • the element attribute information includes at least one of the following: element identifier, element type, element size, element orientation, element extension data, and element loading method.
  • the scene saving instructions include: cloud saving instructions, then the generation module is specifically configured as:
  • the scene description file is associated with the scene identifier and stored in the user's private space of the cloud database through the cloud storage portal.
  • the generation module is specifically configured to: export and store the scene description file into the user private space of the local database through the local saving entrance.
  • Figure 6 shows a schematic structural diagram of a scene data management device according to yet another embodiment of the present disclosure.
  • a scene data management device includes:
  • the acquisition module 61 is configured to respond to the received scene loading request and obtain the scene description file corresponding to the scene loading request;
  • the parsing module 62 is configured to parse the scene description file based on preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file;
  • the loading module 63 is configured to load and display the scene model based on the element attribute information of each scene element.
  • the loading module is specifically configured as:
  • Each scene element included in the scene model is obtained, the scene element configured with the trigger event is determined as the target scene element, and a trigger event matching the event type corresponding to the target scene element is generated.
  • the loading module is specifically configured as:
  • the event type includes: monitoring type and prediction type
  • the event-related data bound to the event interface includes at least one of the following: monitoring object data of the triggering event, monitoring strategy data and event response result data.
  • the parsing module is specifically configured as:
  • the element attribute information includes at least one of the following: element identifier, element type, element size, element orientation, element extension data, and element loading method.
  • The management device shown in Figure 5 can be integrated in the GIS system.
  • The management device shown in Figure 6 can be integrated in either the GIS system or the business system.
  • The management devices in Figure 5 and Figure 6 can also be integrated into the same management device and jointly set up in the GIS system.
  • an electronic device which includes:
  • one or more processors 901;
  • the memory 902 has one or more programs stored thereon. When the one or more programs are executed by one or more processors, the one or more processors implement any one of the above scene data management methods;
  • One or more I/O interfaces 903 are connected between the processor and the memory, and are configured to implement information exchange between the processor and the memory.
  • the processor 901 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
  • the memory 902 is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read-write interface) 903 is connected between the processor 901 and the memory 902 and can realize information exchange between the processor 901 and the memory 902, including but not limited to a data bus (Bus), etc.
  • processor 901, memory 902, and I/O interface 903 are connected to each other and, in turn, to other components of the computing device via a bus.
  • This embodiment also provides a computer-readable medium on which a computer program is stored.
  • When the program is executed by a processor, the scene data management method provided by this embodiment is implemented. To avoid repetition, the specific steps of the scene data management method will not be described in detail here.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are a scene data management method and apparatus, an electronic device, and a readable medium, which belong to the field of computer technology. The method includes: drawing corresponding scene elements through each element drawing entrance included in a scene editing interface; obtaining, through a private data access entrance in the scene editing interface, user private data stored in a private storage space; obtaining, through a public data access entrance in the scene editing interface, system public data stored in a public storage space; editing each drawn scene element on the basis of the user private data and the system public data to obtain a scene model; and, in response to a received scene saving instruction, generating a scene description file to be provided to a scene parser for parsing and loading. By storing user private data and system public data separately, this approach makes it easy to set different data access permissions for different users, thereby isolating and protecting user private data.

Description

Scene data management method, apparatus, electronic device, and readable medium
Technical Field
The present disclosure belongs to the field of computer technology, and specifically relates to a scene data management method, apparatus, electronic device, and readable medium.
Background Art
A Geographic Information System (GIS), also known as a "geoscience information system", is a specific type of spatial information system. Supported by computer hardware and software systems, it is used to collect, store, manage, compute, analyze, display and describe geographically distributed data over all or part of the Earth's surface (including the atmosphere).
For example, in professional systems for fields such as urban management, transportation and environmental protection, geographic information data is often needed for display and data analysis. To meet the application requirements of such scenarios, each scenario has to be developed separately, which is time-consuming and laborious.
It can be seen that scenes developed in existing geographic information systems cannot be saved and reused, which increases scene management costs and is not conducive to the rapid creation and editing of scenes.
Summary
The present disclosure aims to provide a scene data management method, apparatus, electronic device, and readable medium.
A first aspect of the present disclosure provides a scene data management method, which includes:
drawing corresponding scene elements through each element drawing entrance included in a scene editing interface, the scene editing interface including a plurality of element drawing entrances for drawing different types of scene elements;
obtaining, through a private data access entrance in the scene editing interface, user private data stored in a private storage space; and obtaining, through a public data access entrance in the scene editing interface, system public data stored in a public storage space;
editing each drawn scene element based on the obtained user private data and system public data to obtain a scene model containing a plurality of scene elements;
in response to a received scene saving instruction, obtaining element attribute information of each scene element contained in the scene model, generating, based on preset specifications, a scene description file corresponding to the element attribute information of each scene element, and storing the scene description file in the private storage space to be provided to a scene parser for parsing and loading.
A second aspect of the present disclosure provides a scene data management method, which includes:
in response to a received scene loading request, obtaining a scene description file corresponding to the scene loading request;
parsing the scene description file based on preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file;
loading and displaying the scene model based on the element attribute information of each scene element.
A third aspect of the present disclosure provides a scene data management apparatus, which includes:
a drawing module configured to draw corresponding scene elements through each element drawing entrance included in a scene editing interface, the scene editing interface including a plurality of element drawing entrances for drawing different types of scene elements;
an acquisition module configured to obtain, through a private data access entrance in the scene editing interface, user private data stored in a private storage space, and to obtain, through a public data access entrance in the scene editing interface, system public data stored in a public storage space;
an editing module configured to edit each drawn scene element based on the obtained user private data and system public data to obtain a scene model containing a plurality of scene elements;
a generation module configured to, in response to a received scene saving instruction, obtain element attribute information of each scene element contained in the scene model, generate, based on preset specifications, a scene description file corresponding to the element attribute information of each scene element, and store the scene description file in the private storage space to be provided to a scene parser for parsing and loading.
A fourth aspect of the present disclosure provides a scene data management apparatus, which includes:
an acquisition module configured to, in response to a received scene loading request, obtain a scene description file corresponding to the scene loading request;
a parsing module configured to parse the scene description file based on preset specifications to obtain element attribute information of each scene element contained in the scene model corresponding to the scene description file;
a loading module configured to load and display the scene model based on the element attribute information of each scene element.
A fifth aspect of the present disclosure provides an electronic device, including:
one or more processors;
a memory on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any one of the methods described above;
one or more I/O interfaces connected between the processors and the memory and configured to implement information exchange between the processors and the memory.
A sixth aspect of the present disclosure provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing any one of the methods described above.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure, but do not limit the present disclosure. In the drawings:
Figure 1 is a flow chart of a scene data management method provided by an embodiment of the present disclosure;
Figure 2 is a flow chart of a scene data management method provided by yet another embodiment of the present disclosure;
Figure 3 is a flow chart of a scene data management method provided by another embodiment of the present disclosure;
Figure 4 is a flow chart of a specific implementation of step S330;
Figure 5 is a schematic structural diagram of a scene data management apparatus provided by yet another embodiment of the present disclosure;
Figure 6 is a schematic structural diagram of a scene data management apparatus provided by a further embodiment of the present disclosure;
Figure 7 is a functional block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the present disclosure is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meanings understood by persons of ordinary skill in the art to which the present disclosure belongs. The terms "first", "second" and similar words used in the present disclosure do not denote any order, quantity or importance, but are merely used to distinguish different components. Likewise, words such as "a", "an" or "the" do not denote a limitation of quantity, but rather indicate the presence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right" and the like are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
In a first aspect, an embodiment of the present disclosure provides a scene data management method. The management method implements a flexible scene model creation process based on a GIS system and enables a quick save operation for scene models.
As shown in Figure 1, the scene data management method provided by the embodiment of the present disclosure can be applied to a GIS system. The method includes:
Step S110: Draw corresponding scene elements through each element drawing entrance included in the scene editing interface; the scene editing interface contains multiple element drawing entrances for drawing different types of scene elements.
The scene editing interface is used to edit a created scene. The scene editing interface contains multiple element drawing entrances for drawing different types of scene elements; through different element drawing entrances, scene elements of the corresponding types can be drawn.
A GIS scene contains multiple types of scene elements, and the various types of scene elements together constitute a complete GIS scene. For example, scene elements include: base layers, map data, 3D models, vector graphics, etc. The present disclosure does not limit the specific types and quantities of scene elements; any module or unit that can be used to constitute a GIS scene can serve as a scene element.
Step S120: Obtain the user private data stored in the private storage space through the private data access entrance in the scene editing interface; obtain the system public data stored in the public storage space through the public data access entrance in the scene editing interface.
User private data refers to personal data belonging to the currently logged-in user. The user private data of each logged-in user is for use by that user, and other users usually have no permission to access it. Setting up user private data makes it easy to flexibly manage each user's personalized data, such as facility data within the user's workplace or building data near the user's home. User private data is stored in the private storage space and can only be accessed based on the user identifier of the corresponding user; other users have no access rights.
In addition to the user private data uploaded in advance by users, the GIS system also holds system public data that can be shared by all logged-in users. Unlike the permission-based access to user private data, system public data can be accessed by every user. System public data stores the regular data used in map drawing, such as public roads, public buildings and other shared content. The system public data is stored in the public storage space, and every user can access it through the public data access interface.
Step S130: Based on the obtained user private data and system public data, edit each drawn scene element to obtain a scene model containing multiple scene elements.
Each scene element contains a variety of information. Taking a 3D-model-type scene element as an example, the corresponding model resources need to be loaded in order to accurately describe each part of the 3D model. Different model resources may be stored in the private storage space and in the public storage space respectively; accordingly, each drawn scene element is edited based on the obtained user private data and system public data to obtain a scene model composed of these scene elements.
Step S140: In response to the received scene saving instruction, obtain the element attribute information of each scene element contained in the scene model, generate a scene description file corresponding to the element attribute information of each scene element based on the preset specifications, and store the scene description file in the private storage space to be provided to the scene parser for parsing and loading.
The scene saving instruction can be of various types, such as a cloud saving instruction or a local saving instruction. The preset specifications are used to define the data format and storage method of the scene description file. In the present disclosure, a preset specification for generating the scene description file is predefined, through which various types of scene content can be stored in a unified way. The scene description file stores the element attribute information of each scene element contained in the scene model.
Correspondingly, the scene description file can be parsed and loaded through the scene parser, thereby quickly restoring the scene model configured by the user.
It can be seen that, by storing user private data and system public data separately, this approach makes it easy to set different data access permissions for different users, achieving the purpose of isolating and protecting user private data. When loading scene data, only the user private data related to the scene and the system public data within the viewport are loaded, which improves data reuse, reduces the amount of data loading, and improves the scene rendering speed. Moreover, by setting the preset specifications in advance, the scene model can be stored as a standardized scene description file, so that the corresponding scene can be quickly parsed and restored through the scene parser to realize scene management operations, thereby realizing scene reuse and reducing scene management costs.
第二方面,本公开实施例提供一种场景数据的管理方法,该管理方法基于GIS系统或其他业务系统实现场景描述文件的解析及加载过程,并能够实现场景模型的快速加载。
如图2所示,本公开实施例提供的场景数据的管理方法,可以应用于GIS系统或其他业务系统。该方法包括:
步骤S210:响应于接收到的场景加载请求,获取与场景加载请求相对应的场景描述文件。
其中,该场景加载请求可通过GIS系统中的场景加载入口触发,或者,也可以通过业务系统中的场景加载入口触发。其中,业务系统能够与GIS系统通信,用于借助地图数据实现指定的业务功能。
场景描述文件可通过图1所示的实施例中的方式生成。其中,场景描述文件既可以为本地存储的文件,也可以为云端存储的文件,具体可通过场景加载请求中包含的场景标识进行获取。
步骤S220:基于预设规范解析场景描述文件,得到与场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息。
其中,预设规范用于定义场景描述文件的数据格式以及存储方式。在本公开中,预先定义了用于生成场景描述文件的预设规范,通过该预设规范能够统一存储各种类型的场景元素的元素属性信息。相应的,通过场景解析器即可解析并加载场景描述文件,从而快速还原用户配置的场景模型。该场景解析器可设置在GIS系统或业务系统中,以实现场景描述文件的解析及加载功能。
步骤S230:基于各个场景元素的元素属性信息,加载并展示所述场景模型。
为了便于用户针对场景模型进行浏览及二次编辑等管理操作,在本步骤中将加载得到的场景模型展示在系统界面中。
由此可见,该方式通过预先设定预设规范,能够通过场景解析器快速解析场景描述文件,从而还原对应的场景模型,以便于对场景模型进行修改、编辑等管理操作,实现了场景的复用,降低了场景管理成本。
第三方面，本公开实施例提供一种场景数据的管理方法，以实现场景描述文件的生成、解析及加载过程，并能够实现场景模型的快速加载。
如图3所示,本公开实施例提供的场景数据的管理方法,可以应用于GIS系统或其他业务系统。其中,图3所示的实施例旨在针对图1和图2所示的实施例进行更细化的描述。该方法包括:
步骤S310:接收并存储向GIS系统中上传的用户私有数据。
在本实施例中,对地理信息数据进行分层管理,所谓分层管理是指:将用户私有数据以及系统公有数据分别存储在不同的数据存储空间,以实现不同访问权限的管理。例如,将用户私有数据存储至私有存储空间,将系统公有数据存储至系统公有空间。另外,为了防止用户私有数据被其他用户恶意访问,在本实施例中,私有存储空间进一步包括:多个分别对应于不同用户的用户私有空间,其中,不同用户的用户私有空间相互隔离,每个用户私有空间只能通过对应的用户标识进行访问,其他用户无访问权限。
其中,在GIS系统的系统界面中包含私有数据上传入口,通过该私有数据上传入口能够向GIS系统中上传用户私有数据,该用户私有数据也可以称作素材数据,具体可存储在与用户账号相对应的用户私有空间内。例如,响应于接收到的私有数据上传请求,获取私有数据上传请求中包含的用户标识以及用户私有数据;将用户私有数据存储至私有存储空间中与用户标识相对应的用户私有空间。其中,私有数据上传请求通过上述的私有数据上传入口触发。
在一种可选的实现方式中,用户私有数据为用户素材,具体存储在素材管理库中,并通过素材管理入口上传。首先,将GIS系统部署到用户机器上,具体利用安装包部署,也可使用镜像部署。然后,在GIS系统中注册用户。最后,在素材管理库中上传用户私有数据,以备后续步骤使用。其中,素材可以是图片、3D模型、json数据文件等。
例如,在GIS系统的系统界面中设置有“素材管理”入口,用户点击该入口即可上传素材,并设置素材类型。
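作为参考，下面给出一段示意性的TypeScript代码草图，用于说明"按用户标识隔离存储用户私有数据"的一种可能实现思路；其中的类名与字段名（如PrivateStorage、handleUploadRequest、materialType等）均为本文示意所假设，并非本公开限定的具体实现。

// 私有数据上传请求：包含用户标识与待上传的素材数据（字段名为示意性假设）
interface PrivateUploadRequest {
  userId: string;                               // 用户标识
  materialName: string;                         // 素材名称
  materialType: 'image' | 'model3d' | 'json';   // 素材类型
  payload: ArrayBuffer;                         // 素材内容
}

// 私有存储空间：内部划分为多个相互隔离的用户私有空间
class PrivateStorage {
  private spaces = new Map<string, Map<string, ArrayBuffer>>();

  // 获取与用户标识对应的用户私有空间（不存在则创建）
  private spaceOf(userId: string): Map<string, ArrayBuffer> {
    let space = this.spaces.get(userId);
    if (!space) {
      space = new Map();
      this.spaces.set(userId, space);
    }
    return space;
  }

  // 响应私有数据上传请求：将用户私有数据存入与用户标识对应的用户私有空间
  handleUploadRequest(req: PrivateUploadRequest): void {
    this.spaceOf(req.userId).set(req.materialName, req.payload);
  }

  // 读取时校验访问者身份：用户私有数据仅限本用户访问，其他用户无访问权限
  read(requesterId: string, ownerId: string, materialName: string): ArrayBuffer {
    if (requesterId !== ownerId) {
      throw new Error('无访问权限：用户私有数据仅限本用户访问');
    }
    const data = this.spaceOf(ownerId).get(materialName);
    if (data === undefined) {
      throw new Error('素材不存在');
    }
    return data;
  }
}

例如，调用handleUploadRequest即可完成一次素材上传；读取时若传入的requesterId与ownerId不一致，则访问会被拒绝。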
步骤S320:响应于接收到的场景创建请求,生成场景标识,并创建场景框架数据。
该步骤主要用于实现场景创建操作,以便创建一个对应于场景标识的用户场景。该场景通常为用户自定义场景,创建后存储在用户的用户私有空间内。
其中,场景创建请求可通过系统界面中设置的“场景管理”入口触发,响应于接收到的场景创建请求,在系统界面中显示场景创建界面,该场景创建界面中包含场景名称设置入口以及场景简介设置入口,用于设置场景的辅助描述信息,以便通过辅助描述信息快速从多个已创建的场景中筛选可用的场景。
另外,在本步骤中,基于场景创建请求,生成场景标识,并创建场景框架数据。其中,场景框架数据用于实现场景的框架搭建,以展示场景的大体轮廓。其中,可预先设置多种分别对应于不同类型场景的场景框架数据,从而根据场景创建请求中包含的场景类型,创建对应于该场景类型的场景框架数据。
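为便于理解"生成场景标识并按场景类型创建场景框架数据"这一步骤，下面给出一段示意性的TypeScript草图；其中的场景类型取值（city、traffic、environment）与框架字段均为示意性假设，并非本公开限定的内容。

// 场景创建请求中可携带场景名称、场景简介与场景类型（字段名为示意性假设）
interface SceneCreateRequest {
  name: string;
  brief: string;
  sceneType: 'city' | 'traffic' | 'environment';
}

interface SceneFramework {
  sceneId: string;
  sceneType: string;
  layers: string[];   // 框架中预置的基础图层，用于展示场景的大体轮廓
}

// 预先设置的多种场景框架模板，按场景类型选取（取值仅为示意）
const frameworkTemplates: Record<string, string[]> = {
  city: ['basemap', 'terrain', 'buildings'],
  traffic: ['basemap', 'roadNetwork'],
  environment: ['basemap', 'terrain', 'waterSystem'],
};

function createScene(req: SceneCreateRequest): SceneFramework {
  // 生成场景标识（此处以时间戳加随机数示意）
  const sceneId = `scene-${Date.now()}-${Math.floor(Math.random() * 1e6)}`;
  return {
    sceneId,
    sceneType: req.sceneType,
    layers: [...(frameworkTemplates[req.sceneType] ?? ['basemap'])],
  };
}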
步骤S330:响应于接收到的场景编辑请求,基于用户私有数据以及GIS系统中的系统公有数据,绘制与场景编辑请求相对应的场景模型。
在GIS系统中,除存储有用户预先上传的用户私有数据之外,还设置有可供各个登录用户共享的系统公有数据,与用户私有数据的权限访问方式不同,系统公有数据可供所有用户访问。通过系统公有数据可存储地图绘制过程中的常规数据,如公共道路、公共的建筑物等公有化内容。相应的,场景模型基于用户私有数据以及系统公有数据生成。
其中,场景编辑请求用于实现针对已创建场景的编辑操作,具体包括:用于进入场景编辑界面的启动编辑请求、用于绘制不同类型的场景元素的元素绘制请求,用于编辑已绘制的各个元素的元素编辑请求,以及用于针对场景元素配置事件的事件配置请求。相应的,步骤S330进一步包括图4所示的各个子步骤。如图4所示,步骤S330包括如下子步骤:
子步骤S331:通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素,场景编辑界面中包含多个用于绘制不同类型的场景元素的元素绘制入口。
其中,响应于接收到的启动编辑请求,展示上述的场景编辑界面。响应于通过场景编辑界面中包含的元素绘制入口触发的元素绘制请求,获取该元素绘制请求中包含的元素类型标识,绘制与该元素类型标识相对应的场景元素。其中,基于场景框架数据绘制场景元素。
在一种示例中，场景元素包括下述九种类型中的至少一种：基础图层、地图数据、路网、矢量图形、3D模型、数据可视化、事件、特效、视图。相应的，用于绘制不同类型的场景元素的元素绘制入口包括：用于绘制基础图层类型的场景元素的第一元素绘制入口（基础图层绘制入口）、用于绘制地图数据类型的场景元素的第二元素绘制入口（地图数据绘制入口）、用于绘制路网类型的场景元素的第三元素绘制入口（路网绘制入口）、用于绘制矢量图形类型的场景元素的第四元素绘制入口（矢量图形绘制入口）、用于绘制3D模型类型的场景元素的第五元素绘制入口（3D模型绘制入口）、用于绘制数据可视化类型的场景元素的第六元素绘制入口（数据可视化绘制入口）、用于绘制事件类型的场景元素的第七元素绘制入口（事件绘制入口）、用于绘制特效类型的场景元素的第八元素绘制入口（特效绘制入口）、用于绘制视图类型的场景元素的第九元素绘制入口（视图绘制入口）。在该示例中，预先设置了九种类型的场景元素，并在场景编辑界面中提供入口列表区域，该入口列表区域中展示有各个元素绘制入口，用户可根据待绘制的场景元素的类型选择对应的绘制入口。并且，在场景编辑界面中还包含场景图像绘制区域，该区域用于实时展示由用户绘制的各个场景元素所构成的场景模型。通过预先设置多种场景元素，并分别针对各种场景元素设置对应的绘制入口，能够以场景元素为基本单位灵活绘制各类复杂场景。其中，GIS场景由多种不同类型的场景元素构成。
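下面用一段示意性的TypeScript代码，按上文列举的九种场景元素类型勾勒元素绘制入口的一种可能组织方式；其中的枚举取值与入口名称仅为示意，实际可按需要扩展。

// 九种场景元素类型（与上文列举一一对应）
type ElementType =
  | 'baseLayer'         // 基础图层
  | 'mapData'           // 地图数据
  | 'roadNetwork'       // 路网
  | 'vectorGraphic'     // 矢量图形
  | 'model3d'           // 3D模型
  | 'dataVisualization' // 数据可视化
  | 'event'             // 事件
  | 'effect'            // 特效
  | 'view';             // 视图

// 元素绘制请求中携带元素类型标识
interface ElementDrawRequest {
  elementTypeId: ElementType;
}

// 入口列表区域：每种元素类型对应一个绘制入口（名称仅为示意）
const drawEntries: Record<ElementType, string> = {
  baseLayer: '基础图层绘制入口',
  mapData: '地图数据绘制入口',
  roadNetwork: '路网绘制入口',
  vectorGraphic: '矢量图形绘制入口',
  model3d: '3D模型绘制入口',
  dataVisualization: '数据可视化绘制入口',
  event: '事件绘制入口',
  effect: '特效绘制入口',
  view: '视图绘制入口',
};

// 响应元素绘制请求：根据元素类型标识绘制对应类型的场景元素（此处仅打印日志示意）
function drawElement(req: ElementDrawRequest): void {
  console.log(`通过${drawEntries[req.elementTypeId]}绘制 ${req.elementTypeId} 类型的场景元素`);
}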
子步骤S332:通过场景编辑界面中的私有数据访问入口获取私有存储空间中存储的用户私有数据;通过场景编辑界面中的公有数据访问入口获取公有存储空间中存储的系统公有数据。
其中,场景元素中通常还需要包含对应的元素资源数据,例如,以3D模型类型的场景元素为例,还需要针对该3D模型添加对应的元素资源数据,该元素资源数据包括:文件类资源数据、视频类资源数据、图片类资源数据等。
为了便于针对场景元素配置对应的元素资源数据,需要访问上文提到的用户私有数据以及系统公有数据。由于两种数据隔离存储,因此,在访问时,需要分别通过前端界面中的不同访问入口,以指向不同的存储空间,从而获取对应的数据。
例如,响应于通过场景编辑界面中的私有数据访问入口触发的私有数据访问请求,获取该私有数据访问请求中包含的当前用户的用户标识,并调用私有数据访问接口,访问与该用户标识相对应的用户私有空间,以获取用户私有数据。又如,响应于通过场景编辑界面中的公有数据访问入口触发的公有数据访问请求,调用公有数据访问接口,访问公有存储空间,以获取系统公有数据。
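与上述两类访问入口对应，下面给出一段示意性的TypeScript草图，分别示意私有数据访问接口与公有数据访问接口的调用方式；其中的接口地址（/api/private-data、/api/public-data）与参数名均为示意性假设，并非实际系统接口。

// 私有数据访问请求：携带当前用户的用户标识（字段名为示意）
interface PrivateDataRequest {
  userId: string;
  keyword?: string;
}

// 调用私有数据访问接口：只访问与该用户标识对应的用户私有空间
async function fetchPrivateData(req: PrivateDataRequest): Promise<unknown> {
  const url = `/api/private-data?userId=${encodeURIComponent(req.userId)}`; // 示意性接口地址
  const resp = await fetch(url);
  if (!resp.ok) {
    throw new Error('私有数据访问失败');
  }
  return resp.json();
}

// 调用公有数据访问接口：访问公有存储空间，所有用户均可访问
async function fetchPublicData(viewport: { west: number; south: number; east: number; north: number }): Promise<unknown> {
  // 按视口范围请求系统公有数据，便于按需加载、降低数据加载量
  const url = `/api/public-data?bbox=${viewport.west},${viewport.south},${viewport.east},${viewport.north}`; // 示意性接口地址
  const resp = await fetch(url);
  if (!resp.ok) {
    throw new Error('公有数据访问失败');
  }
  return resp.json();
}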
子步骤S333:基于获取到的用户私有数据以及系统公有数据,编辑已绘制的各个场景元素,得到包含多个场景元素的场景模型。
其中,由于场景元素中包含不同种类的元素资源数据,不同的元素资源数据可能分别由用户私有数据以及系统公有数据提供,因此,在本步骤中,基于获取到的用户私有数据以及系统公有数据,编辑已绘制的各个场景元素,从而得到包含多个场景元素的场景模型。由此可见,场景元素是根据用户私有数据以及系统公有数据进行编辑的,场景模型由多个场景元素构成。
子步骤S334：响应于接收到的事件配置请求，确定与事件配置请求相对应的场景元素，针对场景元素配置与事件配置请求中包含的事件类型相匹配的触发事件。
其中,子步骤S334为一个可选的步骤,在其他实施例中,也可以省略子步骤S334。
通过子步骤S334,能够为场景模型添加响应事件,该响应事件通常针对场景模型中的一个或多个场景元素触发。其中,事件配置请求可通过上文提到的事件绘制入口触发,也可以通过其他方式触发,本发明对此不做限定。其中,针对场景元素配置事件接口,将与触发事件相对应的事件关联数据与事件接口进行绑定,以通过事件接口配置触发事件的事件类型。由此可见,在本实施例中,针对特定的场景元素配置有事件接口,并且,将事件接口与事件关联数据相绑定。其中,事件关联数据是指:与触发事件有关的各类数据,如触发事件启动后的监控数据、触发事件启动后续报警处理时的报警数据、报警策略等。其中,事件类型包括:监控类型以及预测类型,相应的,与触发事件相对应的事件关联数据包括以下中的至少一种:触发事件的监控对象数据、监控策略数据以及事件响应结果数据。例如,当事件类型为监控类型时,在针对场景元素配置与事件配置请求中包含的事件类型相匹配的触发事件时,通过事件接口配置触发事件的下述信息:监控对象、监控策略以及事件响应结果。该事件接口用于设置触发事件的事件属性信息,包括以下中的至少一种:触发事件的监控对象、监控策略以及事件响应结果。
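以下为事件接口配置方式的一个示意性TypeScript草图：针对场景元素配置事件接口，并将事件关联数据（监控对象数据、监控策略数据、事件响应结果数据）与之绑定；其中的类型与字段名（如EventBindingData、pollIntervalMs等）均为示意性假设。

// 事件类型：监控类型或预测类型
type EventType = 'monitor' | 'predict';

// 与触发事件相对应的事件关联数据（字段名为示意性假设）
interface EventBindingData {
  eventType: EventType;
  monitorTarget: () => number;            // 监控对象数据：返回被监控量的当前数值
  pollIntervalMs: number;                 // 监控策略数据：轮询间隔
  threshold: number;                      // 监控策略数据：比较阈值
  onTriggered: (value: number) => void;   // 事件响应结果数据：触发后的响应动作
}

// 针对场景元素配置的事件接口：将事件关联数据与场景元素绑定
class ElementEventInterface {
  constructor(public elementId: string, public binding: EventBindingData) {}

  // 按监控策略周期性检查监控对象，满足条件时执行事件响应
  // 此处仅示意监控类型的处理逻辑；预测类型可按类似方式比较使用时长与寿命阈值
  start(): ReturnType<typeof setInterval> {
    return setInterval(() => {
      const value = this.binding.monitorTarget();
      if (value > this.binding.threshold) {
        this.binding.onTriggered(value);
      }
    }, this.binding.pollIntervalMs);
  }
}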
其中,本实施例中的场景模型可应用于数字孪生城市这一应用场景,通过数字孪生城市,能够将真实世界中的建筑物、车辆、水系等景物,以虚拟现实方式呈现在三维虚拟空间中,制作出数字孪生的场景,也叫虚拟世界,从而把每一个城市的三维模型的定位属性、地址和位置信息渲染出来。
在一种可选的实现方式中，针对场景模型中包含的3D模型这一场景对象配置触发事件，以实现火灾监测功能。比如，针对3D模型中的消防系统配置触发事件，该触发事件的事件类型为监控类型，事件关联数据包括：监控对象数据（例如，消防系统的环境温度值，具体通过温度监测器获取监测到的数值）、监控策略数据（例如，监控策略为每隔10秒获取一次温度监测器的监测数值，并将监测数值与预设的火灾温度阈值进行比较，判断监测数值是否大于预设的火灾温度阈值）、事件响应结果数据（例如，若监测数值大于预设的火灾温度阈值，则控制3D模型的展示方式发生变化，如改变3D模型的颜色，或在3D模型中展示火焰元素，以起到警示作用）。
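沿用上文事件接口草图中的ElementEventInterface，火灾监测这一示例可以大致写成如下形式；其中的温度读取函数与阈值取值均为示意性假设。

// 假设readTemperature由温度监测器提供，此处以随机数模拟监测数值
function readTemperature(): number {
  return 20 + Math.random() * 100;
}

const fireEvent = new ElementEventInterface('fire-protection-system', {
  eventType: 'monitor',
  monitorTarget: readTemperature,   // 监控对象：消防系统的环境温度值
  pollIntervalMs: 10_000,           // 监控策略：每隔10秒获取一次监测数值
  threshold: 60,                    // 监控策略：预设的火灾温度阈值（示意值）
  onTriggered: (value) => {
    // 事件响应结果：改变3D模型的展示方式以起到警示作用（此处仅打印日志示意）
    console.log(`温度${value.toFixed(1)}℃超过阈值，改变3D模型颜色并展示火焰元素`);
  },
});

fireEvent.start();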
在又一种可选的实现方式中,针对场景模型中包含的3D模型这一场景对象配置触发事件,以实现机器寿命的预测功能。比如,针对3D模型中的工厂机器配置触发事件,该触发事件的事件类型为预测类型,事件关联数据包括:监控对象数据(例如,机器的使用时长)、监控策略数据(例如,监控策略为机器的使用时长与预设的寿命阈值进行比较,判断使用时长是否大于预设的寿命阈值)、事件响应结果数据(例如,若使用时长与寿命阈值之间的差值的绝对值小于预设值,则改变机器的颜色,以起到警示作用)。另外,在预测过程中,还可以进一步结合预先训练的深度学习模型,获取与机器相关联的多个参数,从而结合多个参数的参数值更加准确的预测机器寿命。
又如,还可以针对雨量监测器这一场景元素添加响应事件,并通过事件接口配置监控事件的监控对象为降雨量,监控策略为每隔5分钟获取一次降雨量监测结果,事件响应结果为当降雨量监测结果大于预设值时进行预警。
在一个具体示例中,利用GIS系统提供的场景编辑器,对场景模型进行可视化编辑,包括绘制地物、道路、添加3D模型、可视化数据、特效、配置视图、编辑关键帧等。
步骤S340:响应于接收到的场景保存指令,基于预设规范生成与场景模型相对应的场景描述文件;其中,场景描述文件用于提供给场景解析器进行解析并加载。
在接收到场景保存指令的情况下,获取场景模型中包含的各个场景元素的元素属性信息,基于预设规范生成与各个场景元素的元素属性信息相对应的场景描述文件,将场景描述文件存储在用户私有空间中,以提供给场景解析器进行解析并加载。
其中,场景保存指令可以为云端保存指令或本地保存指令等多种类型。首先,分别确定场景模型中包含的各个场景元素的元素属性信息,基于预设规范生成用于描述各个元素属性信息的元素描述数据。然后,根据各个元素属性信息的元素描述数据生成场景描述文件。其中,元素属性信息包括以下中的至少一种:元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。预设规范用于定义场景描述文件的格式,具体用于定义场景元素的元素属性信息与场景描述文件中的描述规范之间的映射关系。
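下面用一段示意性的TypeScript代码，勾勒"按预设规范将各场景元素的元素属性信息整理为元素描述数据并生成场景描述文件"的过程；其中的字段名与json组织方式均为示意性假设，并非本公开限定的格式。

// 元素属性信息（与正文列举的属性对应，字段名为示意）
interface ElementAttributes {
  id: string;                            // 元素标识符
  type: string;                          // 元素类型
  size?: [number, number, number];       // 元素尺寸
  position?: [number, number, number];   // 元素方位（经度、纬度、高度等，示意）
  extension?: Record<string, unknown>;   // 元素扩展数据（辅助描述信息）
  loadMode?: 'eager' | 'lazy';           // 元素加载方式
}

// 场景描述文件：按元素类型分组的元素描述数据列表（组织方式为示意）
interface SceneDescriptionFile {
  sceneId: string;
  elements: Record<string, ElementAttributes[]>;
}

// 基于预设规范生成场景描述文件：逐个元素生成元素描述数据并归入对应类型的列表
function buildSceneDescription(sceneId: string, elements: ElementAttributes[]): SceneDescriptionFile {
  const grouped: Record<string, ElementAttributes[]> = {};
  for (const el of elements) {
    if (!grouped[el.type]) {
      grouped[el.type] = [];
    }
    grouped[el.type].push(el);
  }
  return { sceneId, elements: grouped };
}

// 本地保存时可导出为json文件内容；云端保存时可将其与场景标识关联上传
function exportAsJson(file: SceneDescriptionFile): string {
  return JSON.stringify(file, null, 2);
}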
在第一种实现方式中,场景保存指令包括:云端保存指令,相应的,通过云端保存入口将场景描述文件与场景标识关联存储到云端数据库中。例如,用户点击保存按钮,由GIS系统对当前场景进行分析,根据场景配置规范(即预设规范),生成场景描述文件中的场景描述数据,并保存到云端数据库的用户私有空间中。
在第二种实现方式中,场景保存指令包括:本地保存指令,相应的,通过本地保存入口将场景描述文件导出并存储至本地数据库的用户私有空间中。例如,用户点击导出按钮,由GIS系统根据场景配置规范,生成场景描述数据,并导出为json文件。
步骤S350:响应于接收到的场景加载请求,获取与场景加载请求相对应的场景描述文件。
在一种实现方式中,场景加载请求包括:云端加载请求,相应的,获取场景加载请求中包含的场景标识,基于场景标识从云端数据库中获取与场景标识相对应的场景描述文件。
在又一种实现方式中,场景加载请求包括:本地加载请求,相应的, 根据场景加载请求中包含的场景标识,获取存储在本地的与所述场景标识相对应的场景描述文件。
步骤S360:基于预设规范解析场景描述文件,得到与场景描述文件相对应的场景模型,加载并展示场景模型。
其中,基于预设规范解析场景描述文件,得到与场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息;基于各个场景元素的元素属性信息,加载并展示场景模型。例如,获取场景描述文件中包含的元素描述数据,基于预设规范确定与元素描述数据相对应的各个场景元素的元素属性信息;其中,元素属性信息包括以下中的至少一种:元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。另外,元素描述数据用于遵照预设规范所定义的格式,对场景元素的元素属性信息进行存储。相应的,场景描述文件存储在用户私有空间,由元素描述数据构成,且元素描述数据遵循上述的预设规范。
由此可见,按照预设规范对场景元素的元素属性信息进行描述,以得到元素描述数据,由元素描述数据构成场景描述文件。
在一种实现方式中,在加载并展示场景模型时,获取场景模型中包含的各个场景元素,将配置有触发事件的场景元素确定为目标场景元素,生成与目标场景元素相对应的事件类型相匹配的触发事件。具体的,确定针对目标场景元素配置的事件接口,获取与事件接口绑定的事件关联数据,根据事件关联数据生成触发事件。其中,事件类型包括:监控类型以及预测类型,则与事件接口绑定的事件关联数据包括以下中的至少一种:触发事件的监控对象数据、监控策略数据以及事件响应结果数据。当场景加载请求为云端加载请求时(对应于场景保存指令为云端保存指令的情况),在用户使用场景时,利用嵌入式框架(如web iframe)将场景解析器添加到业务系统中,并在URL中指定场景标识,解析器将根据场景标识从GIS系统中获取场景描述数据并解析,最终展示在业务页面中。
当场景加载请求为本地加载请求时（对应于场景保存指令为本地保存指令的情况），在用户使用离线数据时，利用嵌入式框架将场景解析器包含到业务系统中，并利用postMessage等通信机制，将json文件数据发送给iframe框架中的解析器，解析器根据场景描述数据进行解析，最终展示在业务页面中。
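以下示意性TypeScript代码展示业务系统嵌入场景解析器的两种可能方式：云端加载时在iframe的URL中指定场景标识，离线加载时通过postMessage将json文件数据发送给iframe中的解析器；其中的页面地址与消息格式均为示意性假设。

// 业务系统侧：以嵌入式框架方式加入场景解析器页面（页面地址为示意）
function embedSceneParser(container: HTMLElement, sceneId?: string): HTMLIFrameElement {
  const iframe = document.createElement('iframe');
  // 云端加载：在URL中指定场景标识，解析器据此从GIS系统获取场景描述数据
  iframe.src = sceneId
    ? `/gis/scene-parser.html?sceneId=${encodeURIComponent(sceneId)}`
    : '/gis/scene-parser.html';
  container.appendChild(iframe);
  return iframe;
}

// 离线加载：将本地导出的json文件数据通过postMessage发送给iframe中的解析器
function loadOfflineScene(iframe: HTMLIFrameElement, sceneJson: string): void {
  iframe.contentWindow?.postMessage({ type: 'loadScene', payload: sceneJson }, '*');
}

// 解析器侧（iframe内）：监听消息并按预设规范解析场景描述数据（解析细节此处省略）
window.addEventListener('message', (event: MessageEvent) => {
  const msg = event.data as { type?: string; payload?: string };
  if (msg?.type === 'loadScene' && msg.payload) {
    const description = JSON.parse(msg.payload);
    console.log('解析并加载场景模型', description);
  }
});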
其中，在一种实现方式中，步骤S350至步骤S360由GIS系统执行。在又一种实现方式中，步骤S350至步骤S360由能够与GIS系统相互通信的业务系统执行。并且，在后一种方式中，通过业务系统中设置的场景解析器解析场景描述文件，得到与场景描述文件相对应的场景模型。另外，借助业务系统，能够针对加载得到的场景模型实现监控、查询等附加功能。例如，若元素属性信息包括元素扩展数据，则可以在业务系统中查询并展示各个场景元素的元素扩展数据，以便为用户提供参考。其中，元素扩展数据包括：场景元素的辅助描述信息。
其中,基于云端存储的加载方式能够通过云端数据库统一管理用户的场景模型,有利于降低用户侧设备的管理成本。基于本地存储的加载方式使场景模型不经过云端直接存储在本地,对于一些安全性较高的数据而言,能够避免传输过程中被恶意截获的风险。上述两种存储方式可基于业务场景的安全性要求灵活设置。
上述实施例可基于WebGL(即Web图形库)实现,其中,WebGL是一个JavaScript API,可在任何兼容的Web浏览器中渲染高性能的交互式3D或2D图形,而无需使用插件。WebGL通过引入一个与OpenGL ES 2.0非常一致的API来做到这一点,该API可以在HTML5<canvas>元素中使用。该特性使API可以利用用户设备提供的硬件图形加速功能,从而实现模型绘制速度的提升。
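作为参考，下面给出在HTML5 canvas元素中获取WebGL渲染上下文的最小示意代码；其中canvas的元素id（gis-canvas）为示意性假设。

// 从页面中的canvas元素获取WebGL渲染上下文（元素id为示意）
const canvas = document.getElementById('gis-canvas') as HTMLCanvasElement | null;
const gl = canvas?.getContext('webgl');

if (gl) {
  // 利用硬件图形加速执行清屏等基础渲染操作
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  console.warn('当前浏览器不支持WebGL');
}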
上述实施例能够解决目前针对相似需求需要多次开发所存在的不足与缺陷，提供一种能够描述地理信息场景的管理系统，具有流程清晰、省时、节力、维护方便、快速更新等特点，能够有效提高工作效率，满足各类地理信息数据的展示和计算分析的需求。
在本实施例中,利用地理信息数据分层模型实现了地理信息数据的分层管理。其中,将实际场景中的地理信息数据分为系统公有数据(即公开数据)、用户私有数据(例如用户的静态数据)、场景特定数据(即场景模型对应的场景描述文件)三类。其中,系统公有数据由GIS系统提供;用户私有数据由用户上传,只能用户自己访问;在各类使用场景中,基于系统公有数据和用户私有数据能够产生场景特定数据,也就是符合地理信息场景描述语言规范的场景数据。
下面针对一种具体示例中的数据分层模型各层所存储的数据类型进行详细介绍：
数据分层模型的一个数据层为用户私有数据层,该层存储以下类型的数据:地图管理数据、路网数据、地形数据、城市建筑数据、水系数据、矢量数据、区域划分数据、室内地图数据、POI数据、素材数据、数据接口、BIM、CIM、设备数据、控制接口。
数据分层模型的又一个数据层为系统公有数据层,也叫基础数据及能力层,该层存储以下类型的数据:地图基础数据、三维地形数据、地图视角操作、地图图层、地图控件、3D瓦片托管工作流、在线地图图层、矢量数据支持、事件引擎、模型支持、GeoJSON支持、POI兴趣点、地图对象、数据可视化、视频融合、场景漫游、路径规划、地下管网、城市白模、模型单体化、场景背景天空盒、天气特效、空间分析计算、BIM模型分析以及虚拟仿真。
相应的,基于用户私有数据层以及系统公有数据层,并结合时空数据配置描述,即可得到上文提到的场景特定数据(即场景模型所对应的场景描述文件)。场景特定数据具体包括以下类型的数据:地图参数配置、覆盖物属性(可自定义)、事件触发、数据绑定、可视化数据、3D模型、交互元素、导航、视角、故事、图表、动作、特效、素材引用。
本实施例中，通过预设规范（也叫地理信息场景描述语言规范）实现场景描述文件的生成和解析操作，此规范将地理信息场景抽象为基础图层、用户地图、路网、矢量图形、视图、3D模型、事件、特效、数据可视化等九种数据类型，采取json对象的方式组织起来。每种数据类型为一个列表，其中包括数据元素，可以对应到地图上相应的地物对象，并使用json对象方式描述其配置。后期如有需求，可将这九种数据类型扩展，以支撑更多场景。在用户使用GIS系统的可视化场景编辑器将场景数据保存或导出为文件后，形成符合此规范的场景数据，可被GIS系统的场景解析器读取，还原出地理场景进行展示。
例如，描述地面上的一个三维物体时，其可能的配置方式可参考下文给出的示意性示例。
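下面给出一个按照上述规范思路推测的示意性配置，以TypeScript对象字面量形式书写；其中的字段名与取值均为示意性假设，并非本公开限定的配置格式。

// 描述地面上的一个三维物体的示意性配置
const sampleModelConfig = {
  id: 'building-001',            // 元素标识符
  type: 'model3d',               // 元素类型：3D模型
  position: [116.39, 39.91, 0],  // 元素方位：经度、纬度、高度（示意值）
  size: [30, 20, 60],            // 元素尺寸：长、宽、高（示意值）
  loadMode: 'lazy',              // 元素加载方式
  extension: {                   // 元素扩展数据：辅助描述信息
    name: '示例建筑',
    source: 'user-material',     // 素材引用：来自用户私有空间的素材（示意）
  },
};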




总之,在本实施例描述的场景数据生成流程中,基于地理信息数据分层模型和场景描述语言规范,由GIS系统根据用户配置出的视觉场景,生成场景描述数据,用于展示场景之用。具体流程包括以下操作:对地理信息数据进行分层管理,对用户用到的GIS数据进行统一抽象描述,形成地理信息场景描述语言规范。利用行业云GIS系统可视化编辑地理信息,生成用户GIS场景描述数据并存储。待使用时,利用场景数据标识获取到场景数据或使用离线文件导入场景数据,从而将场景数据输入GIS系统中还原场景。
本实施例中的方案至少具有如下有益效果：首先，使用数据分层模型可有效管理地理信息数据，隔离用户私有数据，减少场景数据量，提升加载速度；其次，将静态地图数据托管在云端，达到分散加载和按需加载，提升加载和运行速度，优化用户体验；再次，对GIS场景进行规范描述，生成配置化的数据，并利用GIS系统统一管理，避免了每个场景都需要开发的弊端，提升工作效率；最后，提供了轻量化的云端场景管理方案和适用于离线环境的场景文件管理方案，同时适用于在线和离线两种场景、多种终端，适应性更强，可满足各类使用场景。另外，提供了用于描述地理信息场景的统一规范，使各类场景数据规范化，更易管理。
并且,该方式通过配置事件的方式,能够实现监测或预测等多种功能,该方案能够将GIS系统中的场景模型加载到业务系统中,并结合业务系统的特点,实现监测或预测功能,而且,能够通过元素扩展数据的方式在业务系统中查询元素的辅助描述信息,从而将业务系统中所需的内容以元素扩展数据的方式添加到GIS系统的场景模型中,从而为业务系统的使用提供便利。其中,业务系统可以为上文提到的数字孪生城市系统,也可以为人口迁移预测系统等各类需要与GIS模型相配合的系统。
图5示出了本公开又一实施例提供的一种场景数据的管理装置的结构示意图，如图5所示，该管理装置包括：
绘制模块51,被配置为通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素,所述场景编辑界面中包含多个用于绘制不同类型的场景元素的元素绘制入口;
获取模块52,被配置为通过所述场景编辑界面中的私有数据访问入口获取私有存储空间中存储的用户私有数据;通过所述场景编辑界面中的公有数据访问入口获取公有存储空间中存储的系统公有数据;
编辑模块53,被配置为基于获取到的用户私有数据以及系统公有数据,编辑已绘制的各个场景元素,得到包含多个场景元素的场景模型;
生成模块54，被配置为响应于接收到的场景保存指令，获取所述场景模型中包含的各个场景元素的元素属性信息，基于预设规范生成与所述各个场景元素的元素属性信息相对应的场景描述文件，将所述场景描述文件存储在所述私有存储空间中，以提供给场景解析器进行解析并加载。
可选的,所述私有存储空间进一步包括:多个分别对应于不同用户的用户私有空间;
所述获取模块具体被配置为:根据用户标识确定所述私有存储空间中与当前用户相对应的用户私有空间,从所述与当前用户相对应的用户私有空间中获取用户私有数据;
并且,所述装置还包括:
上传模块,被配置为响应于接收到的私有数据上传请求,获取所述私有数据上传请求中包含的用户标识以及用户私有数据;将所述用户私有数据存储至所述私有存储空间中与所述用户标识相对应的用户私有空间。
可选的,所述装置还包括:
配置模块,被配置为响应于接收到的事件配置请求,确定与所述事件配置请求相对应的场景元素,针对所述场景元素配置与所述事件配置请求中包含的事件类型相匹配的触发事件。
可选的,所述配置模块具体被配置为:
针对所述场景元素配置事件接口,将与所述触发事件相对应的事件关联数据与所述事件接口进行绑定,以通过所述事件接口配置所述触发事件的事件类型。
可选的,所述事件类型包括:监控类型、以及预测类型;则所述与所述触发事件相对应的事件关联数据包括以下中的至少一种:所述触发事件的监控对象数据、监控策略数据以及事件响应结果数据。
可选的,所述生成模块具体被配置为:
分别确定所述场景模型中包含的各个场景元素的元素属性信息,基于所述预设规范,生成与所述场景元素的元素属性信息相对应的元素描述数据,根据所述元素描述数据生成所述场景描述文件;
其中,所述元素属性信息包括以下中的至少一种:元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。
可选的,所述场景保存指令包括:云端保存指令,则所述生成模块具体被配置为:
通过云端保存入口将所述场景描述文件与场景标识关联存储到云端数据库的用户私有空间中。
可选的，所述场景保存指令包括：本地保存指令，则所述生成模块具体被配置为：通过本地保存入口将所述场景描述文件导出并存储至本地数据库的用户私有空间中。
图6示出了本公开又一实施例提供的一种场景数据的管理装置的结构示意图，如图6所示，一种场景数据的管理装置，其包括：
获取模块61,被配置为响应于接收到的场景加载请求,获取与所述场景加载请求相对应的场景描述文件;
解析模块62,被配置为基于预设规范解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息;
加载模块63,被配置为基于所述各个场景元素的元素属性信息,加载并展示所述场景模型。
可选的,所述加载模块具体被配置为:
获取所述场景模型中包含的各个场景元素,将配置有触发事件的场景元素确定为目标场景元素,生成与所述目标场景元素相对应的事件类型相匹配的触发事件。
可选的,所述加载模块具体被配置为:
确定针对所述目标场景元素配置的事件接口,获取与所述事件接口绑定的事件关联数据,根据所述事件关联数据生成所述触发事件。
可选的,所述事件类型包括:监控类型、以及预测类型,则所述与所述事件接口绑定的事件关联数据包括以下中的至少一种:所述触发事件的监控对象数据、监控策略数据以及事件响应结果数据。
可选的,所述解析模块具体被配置为:
获取所述场景描述文件中包含的元素描述数据,基于预设规范确定与所述元素描述数据相对应的各个场景元素的元素属性信息;
其中,所述元素属性信息包括以下中的至少一种:元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。
上述各个模块的具体结构和工作原理可参照方法实施例中对应步骤的描述,此处不再赘述。
另外,图5所示的管理装置可以集成在GIS系统中,图6所示的管理装置既可以集成在GIS系统中,也可以集成在业务系统中。另外,图5和图6中的管理装置也可以集成为同一个管理装置,共同设置在GIS系统中。
参照图7,本公开实施例提供一种电子设备,其包括:
一个或多个处理器901;
存储器902,其上存储有一个或多个程序,当一个或多个程序被一个或多个处理器执行,使得一个或多个处理器实现上述任意一项的场景数据的管理方法;
一个或多个I/O接口903,连接在处理器与存储器之间,配置为实现处理器与存储器的信息交互。
其中，处理器901为具有数据处理能力的器件，其包括但不限于中央处理器（CPU）等；存储器902为具有数据存储能力的器件，其包括但不限于随机存取存储器（RAM，更具体如SDRAM、DDR等）、只读存储器（ROM）、带电可擦可编程只读存储器（EEPROM）、闪存（FLASH）；I/O接口（读写接口）903连接在处理器901与存储器902间，能实现处理器901与存储器902的信息交互，其包括但不限于数据总线（Bus）等。
在一些实施例中,处理器901、存储器902和I/O接口903通过总线相互连接,进而与计算设备的其它组件连接。
本实施例还提供一种计算机可读介质,其上存储有计算机程序,程序被处理器执行时实现本实施例提供的场景数据的管理方法,为避免重复描述,在此不再赘述场景数据的管理方法的具体步骤。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些物理组件或所有物理组件可以被实施为由处理器,如中央处理器、数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其它数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其它存储器技术、CD-ROM、数字多功能盘(DVD)或其它光盘存储、磁盒、磁带、磁盘存储或其它磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其它的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其它传输机制之类的调制数据信号中的其它数据,并且可包括任何信息递送介质。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
可以理解的是,以上实施方式仅仅是为了说明本公开/实用新型的原理而采用的示例性实施方式,然而本公开/实用新型并不局限于此。对于本领域内的普通技术人员而言,在不脱离本公开/实用新型的精神和实质的情况下,可以做出各种变型和改进,这些变型和改进也视为本公开/实用新型的保护范围。

Claims (21)

  1. 一种场景数据的管理方法,其包括:
    通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素,所述场景编辑界面中包含多个用于绘制不同类型的场景元素的元素绘制入口;
    通过所述场景编辑界面中的私有数据访问入口获取私有存储空间中存储的用户私有数据;通过所述场景编辑界面中的公有数据访问入口获取公有存储空间中存储的系统公有数据;
    基于获取到的用户私有数据以及系统公有数据,编辑已绘制的各个场景元素,得到包含多个场景元素的场景模型;
    响应于接收到的场景保存指令,获取所述场景模型中包含的各个场景元素的元素属性信息,基于预设规范生成与所述各个场景元素的元素属性信息相对应的场景描述文件,将所述场景描述文件存储在所述私有存储空间中,以提供给场景解析器进行解析并加载。
  2. 根据权利要求1所述的方法,其中,所述私有存储空间进一步包括:多个分别对应于不同用户的用户私有空间;
    所述通过所述场景编辑界面中的私有数据访问入口获取私有存储空间中存储的用户私有数据包括:根据用户标识确定所述私有存储空间中与当前用户相对应的用户私有空间,从所述与当前用户相对应的用户私有空间中获取用户私有数据;
    并且,所述通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素之前,还包括:响应于接收到的私有数据上传请求,获取所述私有数据上传请求中包含的用户标识以及用户私有数据;将所述用户私有数据存储至所述私有存储空间中与所述用户标识相对应的用户私有空间。
  3. 根据权利要求1所述的方法，其中，所述通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素之后，还包括：
    响应于接收到的事件配置请求,确定与所述事件配置请求相对应的场景元素,针对所述场景元素配置与所述事件配置请求中包含的事件类型相匹配的触发事件。
  4. 根据权利要求3所述的方法,其中,所述针对所述场景元素配置与所述事件配置请求中包含的事件类型相匹配的触发事件包括:
    针对所述场景元素配置事件接口,将与所述触发事件相对应的事件关联数据与所述事件接口进行绑定,以通过所述事件接口配置所述触发事件的事件类型。
  5. 根据权利要求4所述的方法,其中,所述事件类型包括:监控类型、以及预测类型;则所述与所述触发事件相对应的事件关联数据包括以下中的至少一种:所述触发事件的监控对象数据、监控策略数据以及事件响应结果数据。
  6. 根据权利要求1所述的方法,其中,所述通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素之前,还包括:响应于接收到的场景创建请求,生成场景标识,并创建场景框架数据;
    则所述通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素包括:基于所述场景框架数据绘制所述场景元素。
  7. 根据权利要求6所述的方法,其中,所述响应于接收到的场景保存指令,获取所述场景模型中包含的各个场景元素的元素属性信息,基于预设规范生成与所述各个场景元素的元素属性信息相对应的场景描述文件包括:
    分别确定所述场景模型中包含的各个场景元素的元素属性信息,基于所述预设规范,生成与所述场景元素的元素属性信息相对应的元素描述数据,根据所述元素描述数据生成所述场景描述文件;
    其中，所述元素属性信息包括以下中的至少一种：元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。
  8. 根据权利要求7所述的方法,其中,所述场景保存指令包括:云端保存指令,则所述将所述场景描述文件存储在所述私有存储空间中包括:
    通过云端保存入口将所述场景描述文件与场景标识关联存储到云端数据库的用户私有空间中。
  9. 根据权利要求7所述的方法,其中,所述场景保存指令包括:本地保存指令,则所述将所述场景描述文件存储在所述私有存储空间中包括:通过本地保存入口将所述场景描述文件导出并存储至本地数据库的用户私有空间中。
  10. 一种场景数据的管理方法,其包括:
    响应于接收到的场景加载请求,获取与所述场景加载请求相对应的场景描述文件;
    基于预设规范解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息;
    基于所述各个场景元素的元素属性信息,加载并展示所述场景模型。
  11. 根据权利要求10所述的方法,其中,所述加载并展示所述场景模型包括:
    获取所述场景模型中包含的各个场景元素,将配置有触发事件的场景元素确定为目标场景元素,生成与所述目标场景元素相对应的事件类型相匹配的触发事件。
  12. 根据权利要求11所述的方法,其中,所述将配置有触发事件的场景元素确定为目标场景元素,生成与所述目标场景元素相对应的事件类型相匹配的触发事件包括:
    确定针对所述目标场景元素配置的事件接口,获取与所述事件接口绑定的事件关联数据,根据所述事件关联数据生成所述触发事件。
  13. 根据权利要求12所述的方法,其中,所述事件类型包括:监控类型、以及预测类型,则所述与所述事件接口绑定的事件关联数据包括以下中的至少一种:所述触发事件的监控对象数据、监控策略数据以及事件响应结果数据。
  14. 根据权利要求10所述的方法,其中,所述基于预设规范解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息包括:
    获取所述场景描述文件中包含的元素描述数据,基于预设规范确定与所述元素描述数据相对应的各个场景元素的元素属性信息;
    其中,所述元素属性信息包括以下中的至少一种:元素标识符、元素类型、元素尺寸、元素方位、元素扩展数据以及元素加载方式。
  15. 根据权利要求10所述的方法,其中,所述场景加载请求包括:云端加载请求,则所述响应于接收到的场景加载请求,获取与所述场景加载请求相对应的场景描述文件包括:获取所述场景加载请求中包含的场景标识,基于所述场景标识从云端数据库中获取与所述场景标识相对应的场景描述文件。
  16. 根据权利要求10所述的方法,其中,所述场景加载请求包括:本地加载请求,则所述响应于接收到的场景加载请求,获取与所述场景加载请求相对应的场景描述文件包括:根据所述场景加载请求中包含的场景标识,获取存储在本地的与所述场景标识相对应的场景描述文件。
  17. 根据权利要求10-16任一所述的方法,其中,所述方法由GIS系统执行,或者,所述方法由能够与GIS系统相互通信的业务系统执行;
    并且,所述基于预设规范解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息包括:
    通过所述业务系统中设置的场景解析器解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型。
  18. 一种场景数据的管理装置,其包括:
    绘制模块,被配置为通过场景编辑界面中包含的各个元素绘制入口绘制对应的场景元素,所述场景编辑界面中包含多个用于绘制不同类型的场景元素的元素绘制入口;
    获取模块,被配置为通过所述场景编辑界面中的私有数据访问入口获取私有存储空间中存储的用户私有数据;通过所述场景编辑界面中的公有数据访问入口获取公有存储空间中存储的系统公有数据;
    编辑模块,被配置为基于获取到的用户私有数据以及系统公有数据,编辑已绘制的各个场景元素,得到包含多个场景元素的场景模型;
    生成模块,被配置为响应于接收到的场景保存指令,获取所述场景模型中包含的各个场景元素的元素属性信息,基于预设规范生成与所述各个场景元素的元素属性信息相对应的场景描述文件,将所述场景描述文件存储在所述私有存储空间中,以提供给场景解析器进行解析并加载。
  19. 一种场景数据的管理装置,其包括:
    获取模块,被配置为响应于接收到的场景加载请求,获取与所述场景加载请求相对应的场景描述文件;
    解析模块,被配置为基于预设规范解析所述场景描述文件,得到与所述场景描述文件相对应的场景模型中包含的各个场景元素的元素属性信息;
    加载模块,被配置为基于所述各个场景元素的元素属性信息,加载并展示所述场景模型。
  20. 一种电子设备,其包括:
    一个或多个处理器;
    存储装置,其上存储有一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现根据权利要求1-9或权利要求10-17中任意一项所述的方法;
    一个或多个I/O接口,连接在所述处理器与存储器之间,配置为实现所述处理器与存储器的信息交互。
  21. 一种计算机可读介质,其上存储有计算机程序,所述程序被处理器执行时实现根据权利要求1-9或权利要求10-17中任意一项所述的方法。
PCT/CN2023/077101 2022-03-28 2023-02-20 场景数据的管理方法、装置、电子设备及可读介质 WO2023185315A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210311572.6A CN114721737A (zh) 2022-03-28 2022-03-28 场景数据的管理方法、装置、电子设备及可读介质
CN202210311572.6 2022-03-28

Publications (2)

Publication Number Publication Date
WO2023185315A1 true WO2023185315A1 (zh) 2023-10-05
WO2023185315A9 WO2023185315A9 (zh) 2023-11-23

Family

ID=82238806

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/077101 WO2023185315A1 (zh) 2022-03-28 2023-02-20 场景数据的管理方法、装置、电子设备及可读介质

Country Status (2)

Country Link
CN (1) CN114721737A (zh)
WO (1) WO2023185315A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114721737A (zh) * 2022-03-28 2022-07-08 京东方科技集团股份有限公司 场景数据的管理方法、装置、电子设备及可读介质
CN115373764B (zh) * 2022-10-27 2022-12-27 中诚华隆计算机技术有限公司 一种容器自动加载方法和装置


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005165873A (ja) * 2003-12-04 2005-06-23 Masahiro Ito Web3D画像表示システム
CN109861948A (zh) * 2017-11-30 2019-06-07 腾讯科技(成都)有限公司 虚拟现实数据处理方法、装置、存储介质和计算机设备
CN109767368A (zh) * 2019-01-16 2019-05-17 南京交通职业技术学院 一种基于WebGL技术的虚拟化学实验教学平台
CN110377858A (zh) * 2019-07-08 2019-10-25 紫光云技术有限公司 一种可视化拖拽生成动态表单页面的系统及方法
CN111798544A (zh) * 2020-07-07 2020-10-20 江西科骏实业有限公司 可视化vr内容编辑系统及使用方法
CN114721737A (zh) * 2022-03-28 2022-07-08 京东方科技集团股份有限公司 场景数据的管理方法、装置、电子设备及可读介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN, DONGLIN: "Design and Realization of Substation Visualization Platform", FUJIAN COMPUTER, FU JIAN DIAN NAO BIAN JI BU, CN, no. 10, 31 October 2015 (2015-10-31), CN , pages 120, XP009549098, ISSN: 1673-2782, DOI: 10.16707/j.cnki.fjpc.2015.10.064 *
QU ZHAO-YANG; HOU SONG-LIN; ZHANG YU-PING; ZHANG JIAN-HONG; XIN PEN: "Realization of Substation Visualization Training Platform", JOURNAL OF NORTHEAST DIANLI UNIVERSITY, vol. 34, no. 3, 30 June 2014 (2014-06-30), pages 75 - 79, XP009549097, ISSN: 1005-2992 *

Also Published As

Publication number Publication date
WO2023185315A9 (zh) 2023-11-23
CN114721737A (zh) 2022-07-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777694

Country of ref document: EP

Kind code of ref document: A1