CN117891448A - Visual component editing method, system, equipment and medium for constructing page - Google Patents


Info

Publication number
CN117891448A
CN117891448A (application CN202311796307.2A; granted as CN117891448B)
Authority
CN
China
Prior art keywords
component
page
visual
layer
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311796307.2A
Other languages
Chinese (zh)
Other versions
CN117891448B (en)
Inventor
Wei Junhao (魏俊浩)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Julong Technology Co., Ltd.
Original Assignee
Guangzhou Julong Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Julong Technology Co., Ltd.
Priority to CN202311796307.2A
Publication of CN117891448A
Application granted
Publication of CN117891448B
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 — Arrangements for software engineering
    • G06F8/30 — Creation or generation of source code
    • G06F8/34 — Graphical or visual programming
    • G06F8/36 — Software reuse
    • G06F8/38 — Creation or generation of source code for implementing user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A visual component editing method, system, device, and medium for constructing a page, relating to the field of computer technology. The method comprises the following steps: acquiring scene construction information of a target area, the scene construction information comprising static scene information; creating an initial canvas page and generating a plurality of layers in it according to the static scene information, each layer containing an original visual component provided with an editable range; in response to an operator's editing operation, adjusting the original visual component in each layer within the editable range to generate a target visual component in each layer; and outputting the initial canvas page containing the target visual components to a data large screen as a visualization page. This improves the efficiency of constructing data visualization pages.

Description

Visual component editing method, system, equipment and medium for constructing page
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, a system, an apparatus, and a medium for editing a visual component for constructing a page.
Background
With the development of the internet and the acceleration of digital transformation, the concept of the smart city is becoming a reality. In this context, the operation and management of cities is being deeply transformed, and the data visualization page serves as a window into the smart city, playing an important role in integrating massive data, displaying it in real time, and supporting rapid decision-making.
Currently, conventional methods of creating data visualization pages typically rely on operators writing large amounts of code to integrate and visualize the data. In practice, however, the rapid growth of urban data volume demands sophisticated programming skills from operators, and the creation of a complete data visualization page is complicated and time-consuming, so the efficiency of constructing data visualization pages is low.
Disclosure of Invention
The application provides a visual component editing method, system, device, and medium for constructing pages, which improve the efficiency of constructing data visualization pages.
In a first aspect, the present application provides a method for editing a visualization component for building a page, including:
acquiring scene construction information of a target area, wherein the scene construction information comprises static scene information;
creating an initial canvas page, and generating a plurality of layers in the initial canvas page according to the static scene information, wherein each layer comprises an original visual component, and the original visual component is provided with an editable range;
responding to an editing operation of an operator, and adjusting the original visual component in each layer within the editable range to generate a target visual component in each layer;
and outputting the initial canvas page containing the target visual components to a data large screen as a visualization page.
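The four claimed steps can be condensed into a minimal Python sketch; the function name, the dict-based component model, and the concrete ranges are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch of the claimed flow, under assumed data shapes:
# scene info -> layered canvas -> range-constrained edits -> output page.

def build_visual_page(static_scene_info, edits):
    # Steps 1-2: create a canvas and one layer per kind of static scene
    # information, each holding an original component with an editable range.
    canvas = {"layers": []}
    for kind in static_scene_info:
        canvas["layers"].append({
            "kind": kind,
            "component": {"x": 0, "y": 0, "w": 100, "h": 100},
            "editable": {"x": (0, 1920), "y": (0, 1080),
                         "w": (50, 960), "h": (50, 540)},
        })
    # Step 3: apply operator edits, clamped to each component's editable range.
    for layer in canvas["layers"]:
        for key, value in edits.get(layer["kind"], {}).items():
            lo, hi = layer["editable"][key]
            layer["component"][key] = max(lo, min(hi, value))
    # Step 4: the canvas with target components becomes the visualization page.
    return canvas

page = build_visual_page(
    ["map", "traffic", "monitoring"],
    {"traffic": {"x": 2500, "w": 400}},  # x exceeds the range and is clamped
)
```

The clamp in step 3 is what the editable range amounts to operationally: edits are accepted, but never past the configured bounds.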
With this scheme, the scene construction information of the target area is acquired, including static scene information, and a plurality of layers are generated in the newly created initial canvas page according to that information, each layer containing an original visual component with an editable range; this realizes the initial construction of a visualization page based on the target area's scene. On this basis, the operator's editing operations on the original visual components in each layer are handled by adjusting each component within its editable range, generating the target visual component in each layer and thereby customizing the original components into components that meet the requirements. Finally, the initial canvas page containing the customized target visual components is output to the data large screen as a visualization page, presenting a page with user-defined visual effects. In summary, by acquiring scene information, editing components in response to user interaction, and constraining edits to each component's editable range, an interactive page-generation process for specific scenes and customized requirements is realized, so that the generated visualization page meets both the scene-construction requirements and the user's requirements for the page's visual effect, improving the efficiency of constructing the visualization page.
Optionally, acquiring the size and resolution of a visual window in the data large screen; the initial canvas page is generated according to the size and the resolution of the visual window.
By adopting the technical scheme, the size and resolution parameters of the visual window used for displaying the visual content in the data large screen are acquired, and the initial canvas page with the size and resolution completely matched with the visual window is generated according to the acquired parameters of the visual window, so that the visual components added and edited in the canvas page can be accurately matched with the display effect and resolution of the data large screen, the condition that the display and layout of the components are inconsistent with the display of the large screen is avoided, and the display quality of the components in the canvas page is improved. By acquiring the visual window parameters and generating the matched parameterized initial canvas page according to the parameters, the space coordinates and control of the display effect of the data large screen which are accurately matched are provided for the subsequent visual component layout and content display, the display suitability of the visual component is enhanced, the output visual page can be displayed in the data large screen without errors, and the visual display quality of the system is improved.
Optionally, the first layer is created according to the urban map information, and the first layer is set at a preset bottom layer position in the initial canvas page; creating the second layer according to the traffic flow information, and setting the second layer at a preset covering layer position in the initial canvas page; and creating the third layer according to the environment monitoring information, and setting the third layer at a preset top layer position in the initial canvas page.
By adopting the technical scheme, corresponding layers are respectively created in the canvas page according to different types of static scene information, and the hierarchical relationship of each layer is preset, so that the hierarchical design of multiple layers of the canvas page is realized. The map layer at the bottom layer plays a role of a basic background of scene information; the traffic map layer is covered on the map layer and represents traffic flow information; the monitoring layer of the top layer is covered on the flow layer and represents environment monitoring information. The hierarchical arrangement of multiple layers constructs a hierarchical structure and a logic relation of the visual effect of the page, so that different types of scene information visualization components have clear and reasonable display layout. According to the scheme, through the preset design of the multiple layers of the canvas page, the logic of the layout of the visual components is enhanced, and the visual effect of the visual page is more hierarchical and relevant.
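Under assumed layer names and z-order values (the source fixes only the bottom/overlay/top ordering, not any numeric scheme), the three-layer stack can be sketched as:

```python
# Preset z-order: map at the bottom, traffic flow covering it,
# environment monitoring on top. The numeric values are assumptions.
LAYER_ORDER = {"city_map": 0, "traffic_flow": 1, "environment_monitoring": 2}

def make_layers(static_scene_kinds):
    layers = [{"kind": k, "z": LAYER_ORDER[k]} for k in static_scene_kinds]
    # Render in ascending z-order so the map is drawn first and the
    # monitoring layer last, regardless of the input order.
    return sorted(layers, key=lambda layer: layer["z"])

stack = make_layers(["environment_monitoring", "city_map", "traffic_flow"])
```

Sorting by a preset z value keeps the visual hierarchy stable no matter in which order the scene information arrives.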
Optionally, according to a preset visual component library, the original visual components corresponding to the urban map information, the traffic flow information and the environment monitoring information are respectively matched; according to the first layer position, the second layer position, the third layer position and the component positions of the original visual components in each layer, respectively matching the editable position ranges of the original visual components in each layer; and determining the component type of each original visual component, and respectively matching the editable size range of the original visual component in each layer according to the component type and the page size of the initial canvas page.
By adopting the technical scheme, the unified visual component library is preset, so that the quick matching acquisition of different types of scene information and corresponding visual components is realized, and the standardization and the efficiency of component acquisition are improved. According to the position parameters of the components and the layers and the size parameters of the canvas page, the editable position range and the editable size range of the original visual components in each layer are respectively configured, so that reasonable constraint on the position movement and the size scaling of the components is realized, the editing flexibility of the components is ensured, and the problem of out-of-range of the components is avoided. According to the scheme, automation of parameterized configuration of the original visual components is realized by matching the component library and configuring the editable range, standardized generation and intelligent suitability of the components are enhanced, and quality and efficiency of subsequent visual page generation are facilitated.
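A minimal sketch of matching components from a preset library and deriving editable ranges from the canvas size; the library contents and the per-type size fractions are illustrative assumptions:

```python
# Assumed preset component library mapping scene-information types
# to original visual components.
COMPONENT_LIBRARY = {
    "city_map": {"type": "map"},
    "traffic_flow": {"type": "flow"},
    "environment_monitoring": {"type": "monitor"},
}
# Assumed maximum size of each component type as a fraction of the canvas.
SIZE_FRACTION = {"map": 1.0, "flow": 0.5, "monitor": 0.25}

def editable_ranges(kind, canvas_w, canvas_h):
    comp_type = COMPONENT_LIBRARY[kind]["type"]
    f = SIZE_FRACTION[comp_type]
    return {
        # Position may vary anywhere on the canvas; size is capped per type.
        "position": {"x": (0, canvas_w), "y": (0, canvas_h)},
        "size": {"w": (0, int(canvas_w * f)), "h": (0, int(canvas_h * f))},
    }

r = editable_ranges("traffic_flow", 1920, 1080)
```

Deriving the ranges from canvas dimensions and component type is what prevents the out-of-range problem the paragraph describes.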
Optionally, acquiring editing instructions corresponding to the component dragging, the component scaling, the component color setting and the component font proportion setting; according to the editing instruction, the original visual components in each layer are adjusted to obtain each visual component; and if the component position of each visual component does not exceed the editable position range and the component size of each visual component does not exceed the editable size range, taking each visual component as the target visual component in each layer.
By adopting the technical scheme, the instruction parameters corresponding to the interactive editing operation are obtained, and the adjustment of the position, the size, the style and other parameters of the original visual component is automatically executed according to the parameterized editing instructions, so that the automatic execution of the man-machine interactive editing process is realized. And setting a verification mechanism after component parameter adjustment, and outputting the edited component as a target component only when the position and the size of the component are within a preset editable range, so that the compliance of the component editing process is ensured. According to the scheme, through parameterized editing instructions and result verification, the process automation of interactive editing is realized, the editing efficiency is improved, the correctness of an editing result is ensured, and the user-defined visual assembly is effectively supported to be generated quickly.
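The edit-then-validate flow can be sketched as follows; the instruction names (`drag`, `scale`, `color`, `font_scale`) and the dict-based component model are assumptions standing in for the patent's parameterized editing instructions:

```python
def apply_edits(component, instructions, pos_range, size_range):
    edited = dict(component)
    for op, params in instructions:
        if op == "drag":          # component dragging: new (x, y)
            edited["x"], edited["y"] = params
        elif op == "scale":       # component scaling: multiply w and h
            edited["w"] = int(edited["w"] * params)
            edited["h"] = int(edited["h"] * params)
        elif op == "color":       # component color setting
            edited["color"] = params
        elif op == "font_scale":  # component font proportion setting
            edited["font_scale"] = params
    # Verification: only a component within the editable position and
    # size ranges becomes the target component; otherwise reject.
    ok = (pos_range["x"][0] <= edited["x"] <= pos_range["x"][1]
          and pos_range["y"][0] <= edited["y"] <= pos_range["y"][1]
          and size_range["w"][0] <= edited["w"] <= size_range["w"][1]
          and size_range["h"][0] <= edited["h"] <= size_range["h"][1])
    return edited if ok else None

target = apply_edits(
    {"x": 10, "y": 10, "w": 200, "h": 100},
    [("drag", (40, 60)), ("scale", 1.5), ("color", "#1e90ff")],
    {"x": (0, 1920), "y": (0, 1080)},
    {"w": (0, 960), "h": (0, 540)},
)
```

Returning `None` on a failed check models the compliance gate: a non-compliant edit never produces a target component.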
Optionally, calling configuration data corresponding to the dynamic scene information according to a preset API; binding the configuration data according to the component names of the target visual components in each layer to obtain target components in each layer; and generating a visual file according to each target component and the initial canvas page, and synchronizing the visual file to the data large screen to generate the visual page.
By adopting the technical scheme, the configuration data of the dynamic scene is acquired by calling the preset API, and the corresponding relation between the components and the dynamic scene data is established. The binding of the configuration data is a real-time data source on the association of the components, and supports the subsequent dynamic visual expression. And then, the canvas page containing the target component is used for generating a visual file according to a unified format, and the visual file is effectively transmitted and synchronized to a data large screen, so that the automatic and efficient presentation of the visual page of the dynamic scene is realized. The scheme realizes the seamless connection of the visual assembly and the dynamic data, organically combines the construction of the visual page with the data binding, supports the automatic transmission and presentation of the visual page containing the dynamic data, and improves the intelligent capability of the system.
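A sketch of binding dynamic configuration to components by name and serializing the canvas into a unified file for the data screen; JSON as the file format and the `fetch_config` callable standing in for the preset API are assumptions:

```python
import json

def bind_and_export(canvas, fetch_config):
    # fetch_config stands in for the preset API call returning dynamic
    # scene configuration keyed by component name (an assumed data shape).
    config = fetch_config()
    for layer in canvas["layers"]:
        name = layer["component"]["name"]
        # Bind each component to its real-time data source by name.
        layer["component"]["data_source"] = config.get(name)
    # Serialize the canvas in a unified format for sync to the large screen.
    return json.dumps(canvas)

canvas = {"layers": [{"component": {"name": "traffic_flow"}}]}
exported = bind_and_export(
    canvas,
    lambda: {"traffic_flow": {"url": "/api/flow", "interval_s": 5}},
)
```

Binding by component name keeps page construction and data wiring decoupled: the same exported file works against any API that returns configuration under the same names.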
Optionally, responding to the page calling operation of the operator, and acquiring the position of the calling object in the target area; and determining a target layer page according to the position, and synchronizing the target layer page and the visual page to the data display screen as a combined page.
By adopting the technical scheme, the position of the visual page object called by the operator is acquired, and the canvas page of the layer where the object is located is determined as the navigation target layer page. And then, seamlessly synthesizing the target layer page and the current visual page to generate a combined scene page, thereby realizing seamless navigation positioning of visual page objects. The scheme supports interactive navigation skip of the visual page objects, realizes continuous browsing of the objects in a plurality of related visual pages, and enhances the extensibility and relevance of visual scenes. The method and the device improve the interactivity of the visual page, realize the accurate navigation and positioning of the visual object, and provide smooth and barrier-free visual scene experience for the user.
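Locating the called object's layer page and synthesizing the combined page might look like the following sketch; bounding-box containment and the page names are assumptions:

```python
def combine_pages(layers, object_position, current_page):
    # Find the layer whose bounds contain the called object's position
    # (simple bounding-box containment, an assumption for this sketch).
    for layer in layers:
        x0, y0, x1, y1 = layer["bounds"]
        if x0 <= object_position[0] <= x1 and y0 <= object_position[1] <= y1:
            # Synthesize the target layer page with the current page
            # into one combined page for the display screen.
            return {"pages": [current_page, layer["page"]]}
    return {"pages": [current_page]}

combined = combine_pages(
    [{"bounds": (0, 0, 960, 1080), "page": "west-district"},
     {"bounds": (960, 0, 1920, 1080), "page": "east-district"}],
    (1200, 400),
    "city-overview",
)
```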
In a second aspect of the present application, a visualization component editing system for building pages is provided, comprising:
The information acquisition module is used for acquiring scene construction information of the target area;
the assembly matching module is used for creating an initial canvas page and determining a plurality of layers in the initial canvas page and editing parameters of a first visual assembly in each layer according to the scene construction information;
the assembly editing module is used for responding to the editing operation of an operator based on the editing parameters to obtain second visual assemblies in each layer;
and the page generation module is used for generating a visual page according to the second visual component and the initial canvas page in each layer.
In a third aspect of the present application, an electronic device is provided.
An electronic device comprises a memory, a processor, and a program stored in the memory and runnable on the processor, wherein the program is loaded and executed by the processor to implement the visual component editing method for constructing pages described above.
In a fourth aspect of the present application, a computer-readable storage medium is provided.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement a method of visual component editing for building pages.
In summary, one or more of the technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
With this scheme, the scene construction information of the target area is acquired, including static scene information, and a plurality of layers are generated in the newly created initial canvas page according to that information, each layer containing an original visual component with an editable range; this realizes the initial construction of a visualization page based on the target area's scene. On this basis, the operator's editing operations on the original visual components in each layer are handled by adjusting each component within its editable range, generating the target visual component in each layer and thereby customizing the original components into components that meet the requirements. Finally, the initial canvas page containing the customized target visual components is output to the data large screen as a visualization page, presenting a page with user-defined visual effects. In summary, by acquiring scene information, editing components in response to user interaction, and constraining edits to each component's editable range, an interactive page-generation process for specific scenes and customized requirements is realized, so that the generated visualization page meets both the scene-construction requirements and the user's requirements for the page's visual effect, improving the efficiency of constructing the visualization page.
Drawings
FIG. 1 is a flow diagram of a method for editing a visual component for building a page according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a visual component editing system for building pages as disclosed in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application.
Reference numerals illustrate: 300. an electronic device; 301. a processor; 302. a communication bus; 303. a user interface; 304. a network interface; 305. a memory.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "for example" or "such as" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein as "for example" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The embodiment of the application provides a visual component editing method for constructing pages. In one embodiment, please refer to fig. 1, which is a flowchart illustrating a method for editing a visualization component for building a page according to an embodiment of the present application. The method may be implemented by a computer program, and the computer program may be integrated into an application or run as a separate tool-class application. The method may be implemented on a single-chip microcomputer, or may run in a visual component editing system for constructing pages based on the Von Neumann architecture. Specifically, the method may include the following steps:
Step 101: and acquiring scene construction information of the target area, wherein the scene construction information comprises static scene information.
Wherein, the scene construction information refers to content data required for constructing the data visualization page. Information about space, environment, traffic and the like of a target area can be understood in the embodiment of the application, and the information is used for supporting component design, visual presentation and dynamic data binding of a page.
The scene construction information includes two parts: static scene information refers to information content of fixed or slow change of spatial position, topography, environmental facilities and the like of a target area, such as a map, roads, building distribution, monitoring station positions and the like. This part of the information is used for the initial construction of the page.
Dynamic scene information refers to real-time change data in a target area, such as environmental monitoring data, traffic flow, and the like. The information is used for dynamic data binding of the page, and real-time updating of the page is realized.
Specifically, the scene construction information of the target area is acquired in order to obtain the content-data basis required for page construction. The target area may be a city area, a campus, a building complex, or any other area where the data page needs to be presented. Acquiring scene construction information requires first determining the target area and then gathering information through the various information systems associated with it. For example, for a city area, city map information, environmental information, traffic information, and the like may be acquired as static scene information from a city planning system, an environmental monitoring system, a traffic management system, and so on. The acquired static scene information may include map contours and labels, key-area identifiers, the distribution of environmental monitoring stations, the road traffic network, and the like. Once the static scene information is acquired, the page can be initially planned and designed from it: the map information determines the spatial range and labels of the page; the environmental information determines the distribution of the displayed monitoring points; and the traffic information shapes the road-network presentation style. Obtaining sufficient, accurate and appropriate static scene information strongly supports the subsequent work of component design, content display, and visual layout, and lays the foundation for later real-time binding with dynamic data.
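The gathering step can be sketched as querying several source systems for one target area; the source names, the lambda stubs, and the returned fields are illustrative stand-ins for the real planning, environment, and traffic systems:

```python
def collect_static_scene_info(area_id, sources):
    # Each entry in `sources` maps a scene-information kind to a query
    # against the corresponding source system (stubbed here).
    return {kind: query(area_id) for kind, query in sources.items()}

info = collect_static_scene_info(
    "guangzhou-cbd",  # hypothetical target-area identifier
    {
        "city_map": lambda a: {"area": a, "roads": 120},
        "environment": lambda a: {"stations": 8},
        "traffic": lambda a: {"trunk_roads": 15},
    },
)
```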
Step 102: and creating an initial canvas page, and generating a plurality of layers in the initial canvas page according to the static scene information, wherein each layer comprises an original visual component, and the original visual component is provided with an editable range.
Wherein the initial canvas page refers to a blank page container for carrying the visualization component. In the embodiment of the application, a blank canvas is understood to be used for carrying various visual components added later.
The original visualization component refers to the initial visualization element used for page construction. In the embodiment of the application, the initial map, flow and other component resources selected from the component library are understood to be used for building the basic display content of the page.
Specifically, the initial canvas page is created to obtain a blank page container for the subsequent addition and editing of visual components. The canvas page's size and resolution are set at creation time to match the final visual display device. After the static scene information is acquired, a multi-layer design can be applied to the initial canvas page, with each layer carrying a different type of visual component.
For example, the bottom layer may place map components, the middle layer may place flow components, the top layer may place monitoring status components, etc. The multi-layer design facilitates free editing and assembly of components. From the context information, the type and content of visualization components required for each layer may be determined. And then searching matched original visual components, such as map components, flow components and the like, from a preset component library, and adding the matched original visual components into the corresponding layers. These components are provided with editable ranges, including a position range and a size range, to facilitate subsequent editing operations such as dragging, zooming, and the like. By creating an initial canvas page and a multi-layer design and adding an editable original visual component according to the initial canvas page and the multi-layer design, a basic platform and content materials for page editing are formed. The method lays a frame-like support for subsequent assembly editing and page generation, and is a key foundation for realizing visual visualization of the page.
Based on the above embodiment, as an alternative embodiment, in step 102: creating an initial canvas page, which may further comprise the steps of:
step 201: the size and resolution of the visual window in the data large screen are obtained.
Wherein, the visual window refers to an actual area for displaying visual contents in the data large screen. In this embodiment, it may be understood that the display area for playing the visual page is planned in the data large screen, so as to directly determine the display effect of the visual page.
Specifically, the size and resolution of the visual window in the data large screen are obtained in order to determine the size of the initial canvas page according to the display parameters of the large screen. The visual window is the actual area in the data large screen used for displaying visual content; its size and resolution directly determine the display effect of the page. Obtaining the size of the visual window includes obtaining the length, width, and display-shape parameters of the window; obtaining the resolution means obtaining the number of pixels in the window, i.e., the pixel counts along its length and width. The color parameters of the screen and so on may also be acquired. These parameters may be obtained by querying the specifications of the display device or by direct detection with a software tool. Once obtained, the exact size and resolution of the initial canvas page can be set accordingly to fully match the display effect of the data large screen. Acquiring the size and resolution parameters of the visual window allows the initial canvas page to correctly match the display capability of the large screen and avoids content distortion or blank margins, thereby providing a display basis for subsequent page additions and editing.
Step 202: an initial canvas page is generated based on the size and resolution of the visual window.
Specifically, after the size and resolution parameters of the visual window are obtained, a matching initial canvas page is generated from them. When the canvas page is generated, a blank page container is newly created, and its length, width, resolution and other parameters are set to exactly match the size and resolution of the visual window. For example, if the visual window has a 16:9 aspect ratio and a resolution of 1920 × 1080, a canvas page with the same proportions and resolution is generated accordingly. The canvas page may be created programmatically by calling the relevant graphical-interface functions to generate a canvas whose size and resolution are fully consistent with the acquired visual window parameters, or a blank page with a custom resolution may be created in a visual page editor tool. Keeping the canvas page highly consistent with the visual window parameters ensures that subsequently added components and content accurately match the display capacity of the large screen, avoiding mismatches between the page display and the window. The precisely matched canvas provides precise spatial coordinates and control for subsequent component layout and content presentation.
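Step 202 amounts to deriving the canvas parameters directly from the window parameters; in this sketch the dict shape and the aspect-ratio string are assumptions:

```python
from math import gcd

def create_canvas(window_w_px, window_h_px):
    # New blank page container whose pixel dimensions exactly match the
    # large screen's visual window, avoiding distortion or blank margins.
    g = gcd(window_w_px, window_h_px)
    return {
        "width": window_w_px,
        "height": window_h_px,
        "aspect_ratio": f"{window_w_px // g}:{window_h_px // g}",
        "layers": [],
    }

canvas = create_canvas(1920, 1080)
```

Reducing width and height by their greatest common divisor recovers the aspect ratio (here 16:9) from the pixel counts alone.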
Based on the above embodiment, as an alternative embodiment, in step 102: generating a plurality of layers in the initial canvas page according to the static scene information, which can further comprise the following steps:
step 203: and creating a first layer according to the urban map information, and setting the first layer at a preset bottom layer position in the initial canvas page.
Specifically, after the city map information is obtained, a first layer needs to be created on the initial canvas page for placing and presenting the map components. The map information defines the range and style of the map, so the map area and map type to be displayed can be determined, and a matching layer, i.e., the first layer, can be created accordingly. This layer is dedicated to carrying the presentation of the map component. The position of the first layer in the canvas page needs to be preset as the bottom layer: because the map component plays the fundamental role of scene information, it must be placed at the bottom of the page, with other components placed above it. Setting the hierarchical order of the first layer at the bottom makes it the basic background layer of the whole page. By creating a dedicated map layer and presetting it as the bottom layer, the map component becomes the basic display content of the page and supports the visual effect of the whole page. The display of the other functional components is built on the basis of the map layer, so a reasonable and intuitive visual hierarchy of the page can be constructed.
Step 204: and creating a second layer according to the traffic flow information, and setting the second layer at a preset covering layer position in the initial canvas page.
Specifically, after the traffic flow information is obtained, a second layer needs to be created on the initial canvas page for carrying and displaying the flow components. The traffic flow information clarifies how the flow on the main roads changes, so a matching second layer can be created for placing flow-change visualization components such as dynamic flow maps and road-condition icons. The second layer needs to be preset at the overlay position of the canvas page, that is, above the underlying map component. Placing the flow components at this level allows the flow-information visualization components to float above the map and be displayed interactively with it. By carrying the flow components on a dedicated second layer and setting it as the overlay, the flow-information visualization components can be reasonably displayed on top of the map component and interactively associated with the map, forming the intermediate information layer of the page.
Step 205: and creating a third layer according to the environment monitoring information, and setting the third layer at a preset top layer position in the initial canvas page.
Specifically, after the environmental monitoring information is obtained, a third layer needs to be created on the initial canvas page for carrying and displaying the environment monitoring components. The environment monitoring information determines the distribution of the monitoring points and the monitoring data, so a matching third layer can be created for placing environmental-monitoring visualization components such as air-quality icons and data curves. The third layer needs to be preset at the top position of the canvas page, overlaid on the map layer and the traffic layer. Placing the monitoring components at this level allows the monitoring-information visualization components to float above the map and the traffic flow and form relationships with both. By carrying the monitoring components on a dedicated third layer and setting it as the top layer, the environment-monitoring visualization components can be reasonably displayed on top of the map and flow components and interact with them, forming the top information layer of the page.
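The three-layer stacking described in steps 203-205 can be sketched as a z-ordered layer list, continuing the illustrative dictionary model (names are assumptions, not the disclosed implementation):

```python
def add_layer(canvas, name, z_index):
    """Attach a named layer to the canvas; lower z_index is drawn first,
    so z_index 0 is the bottom (background) layer."""
    layer = {"name": name, "z_index": z_index, "components": []}
    canvas["layers"].append(layer)
    canvas["layers"].sort(key=lambda l: l["z_index"])  # keep draw order stable
    return layer

canvas = {"width": 1920, "height": 1080, "layers": []}
add_layer(canvas, "map", 0)          # first layer: bottom (city map)
add_layer(canvas, "traffic", 1)      # second layer: overlay (traffic flow)
add_layer(canvas, "environment", 2)  # third layer: top (environment monitoring)

render_order = [layer["name"] for layer in canvas["layers"]]
```

Rendering the layers in ascending z_index reproduces the map-under-traffic-under-monitoring hierarchy the steps require.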
Based on the above embodiment, as an alternative embodiment, in step 102: generating a plurality of layers in the initial canvas page according to the static scene information, wherein the original visual component is provided with an editable range, and the method can further comprise the following steps:
Step 206: and respectively matching the original visual components corresponding to the urban map information, the traffic flow information and the environment monitoring information according to a preset visual component library.
Specifically, matching the original visualization components means finding suitable initial component templates based on the layer information. A preset visual component library collects various types of general components, such as map, flow and monitoring components, which have a certain degree of versatility. After the scene information is acquired, matching original components need to be selected from the component library: for the urban map information, a relevant map component is found; for the traffic flow information, a flow-statistics component; for the environment monitoring information, a data-reading component. During selection, it must be judged comprehensively whether the functional type, presentation mode, required data and other elements of the component match the information content of the layer. After the matched original component templates are selected, they are added into the corresponding layers. By matching and selecting original components from the preset component library, the corresponding visual templates can be quickly obtained according to the layer information, providing basic material for subsequent information presentation and component editing and improving working efficiency.
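The library lookup in step 206 might look like the following sketch; the library keys and template fields are hypothetical placeholders for whatever the preset component library actually stores:

```python
# Hypothetical component library keyed by scene-information type.
COMPONENT_LIBRARY = {
    "city_map": {"type": "map", "render": "tile_map"},
    "traffic_flow": {"type": "flow", "render": "dynamic_flow_map"},
    "environment": {"type": "monitor", "render": "data_curve"},
}

def match_component(scene_info_type):
    """Look up the original component template for a scene-information type."""
    template = COMPONENT_LIBRARY.get(scene_info_type)
    if template is None:
        raise KeyError(f"no component template for {scene_info_type!r}")
    return dict(template)  # copy, so per-layer edits do not mutate the library
```

Returning a copy means each layer can later customize its component instance without affecting the shared template in the library.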
Step 207: and respectively matching the editable position range of the original visual component in each layer according to the first layer position, the second layer position, the third layer position and the component position of the original visual component in each layer.
Specifically, the editable position range of each original visual component is set for subsequent position and size editing of the component. After the original components are added to each layer, each has a default initial position. The editable range of a component needs to be set according to the component's specific position in the layer and the layer's position in the canvas page. For example, if the map component is in the lower-left corner of the first layer, its editable range may be set to the area of the entire canvas page other than the upper-right corner, allowing the position to be dragged without exceeding the map range. If the flow component is in the middle of the second layer, its editable range can be set as a surrounding area of a certain extent around the middle, allowing appropriate dragging. The environmental monitoring component can be given a small position-editing range, and so on. After the editable range is set, the component can only be moved or scaled within that range during subsequent editing. This guarantees the flexibility of the components while avoiding out-of-range problems. By setting the editable range of a component, its editing is reasonably controlled, ensuring editing flexibility while maintaining layout rationality.
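One simple way to derive such a position range, assuming the constraint is just "the component must stay fully inside the canvas" (the patent leaves the exact rule open):

```python
def editable_position_range(canvas_w, canvas_h, comp_w, comp_h, margin=0):
    """Compute the rectangle of allowed top-left positions so that a
    comp_w x comp_h component never leaves the canvas during dragging."""
    return {
        "x_min": margin,
        "y_min": margin,
        "x_max": canvas_w - comp_w - margin,
        "y_max": canvas_h - comp_h - margin,
    }

# A 400 x 300 component on a 1920 x 1080 canvas:
rng = editable_position_range(1920, 1080, 400, 300)
```

Per-layer rules (e.g. keeping the flow component near the middle of the overlay) would simply shrink this rectangle further.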
Step 208: and determining the component type of each original visual component, and respectively matching the editable size range of the original visual component in each layer according to the component type and the page size of the initial canvas page.
Specifically, the editable size range of the original visual component is set for subsequent size scaling of the component. The type of each original component is determined first, since different component types suit different size ranges. For example, the map component can be given a larger size range, allowing flexible scaling to fit the window, while a monitoring-point component needs a smaller range to avoid distortion. The size range of each component is set reasonably in combination with the size of the canvas page: the maximum and minimum sizes are adapted to the available canvas space, ensuring adequate zoom variation while avoiding overflow beyond the page. After the editable size range is set, a component can only be scaled within that range during subsequent editing, which ensures editing rationality and reduces distortion problems. By setting the editable size range, component-size editing is given reasonable constraints, which both preserves editing flexibility and maintains the quality of the display effect.
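The per-type size ranges could be expressed as fractions of the canvas size; the specific fractions below are illustrative assumptions chosen to match the examples in the text (large range for maps, small range for monitoring points):

```python
# Illustrative per-type size limits as (min, max) fractions of the canvas.
SIZE_LIMITS = {
    "map":     (0.25, 1.0),   # map components may fill the whole canvas
    "flow":    (0.10, 0.5),   # flow components stay mid-sized
    "monitor": (0.05, 0.25),  # monitoring points stay small to avoid distortion
}

def editable_size_range(component_type, canvas_w, canvas_h):
    """Scale the type's fractional limits to concrete pixel bounds."""
    lo, hi = SIZE_LIMITS[component_type]
    return {
        "min_w": int(canvas_w * lo), "max_w": int(canvas_w * hi),
        "min_h": int(canvas_h * lo), "max_h": int(canvas_h * hi),
    }

map_range = editable_size_range("map", 1920, 1080)
```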
Step 103: and responding to the editing operation of an operator, adjusting the original visual components in each layer in an editable range, and generating the target visual components in each layer.
The editing operation refers to a parameterized operation for adjusting the original visual component. In the embodiment of the application, the operator can drag, scale and otherwise adjust the position, size, style and other attributes of a component based on requirements, giving the component a personalized display effect.
Specifically, the operator can perform editing operations such as dragging to move a component's position and zooming to change its size, within the component's editable range. For example, the map component can be dragged to change the display area and zoomed to adjust the display scale; the flow component can be moved to a proper position above the road layer and scaled to match the road size; the monitoring-point component can be dragged to an area of interest, and so on. Provided the editable range is not exceeded, operators can freely adjust the position and size of a component according to their requirements and habits, customizing the component to their needs. After editing, the original visual component has been adjusted in position, size, style and other aspects, generating the target customized visual component that matches the specific page requirements. Through this human-computer interaction, the visual components needed by the user are generated efficiently, improving the flexibility of the components.
On the basis of the above embodiment, as an alternative embodiment, in step 103: in response to an editing operation of an operator, adjusting the original visual components in each layer in an editable range, and generating target visual components in each layer, the method further comprises the following steps:
step 301: and acquiring editing instructions corresponding to the component dragging, the component scaling, the component color setting and the component font proportion setting.
Specifically, the editing operations on a component include dragging its position, scaling its size, setting its color, adjusting its font, and so on. When the operator performs these interactions, the system responds with a series of parameterized edit commands; for example, dragging the map component to a specified coordinate location (x, y), magnifying the flow component to 1.5 times its size, setting the color of the monitoring component to red, or adjusting the title font to 25 pixels. These interactive editing actions of the operator are detected and acquired in real time and converted into corresponding instruction parameters for subsequent editing execution. The acquisition of editing instructions is the basis for parameterized execution of human-computer interactive editing, and provides a detailed operational basis for subsequent component adjustment.
Step 302: and adjusting the original visual components in each layer according to the editing instruction to obtain each visual component.
Specifically, after the operation instructions for editing the components are obtained, the original components need to be adjusted according to the instructions to generate the target components. For example, when the instruction "drag map component to coordinates (100, 80)" is acquired, the executing program changes the position parameter of the map component and sets the position to the specified coordinates; when the instruction "magnify the flow component to 1.5 times" is acquired, a 1.5x scaling operation is performed on the size parameters of the flow component; when instructions for setting a color or adjusting a font are acquired, the corresponding color-parameter and font-parameter adjustments are performed. The editing instructions can thus be converted into definite parameterized operations on the components, and the server side can execute the parameter adjustments automatically without manually processing each step. By changing the component parameters according to the instructions, interactive editing results can be realized effectively and automatically, generating the target visual components required by the user.
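A sketch of this instruction-to-parameter mapping, using the examples from the text (the instruction schema with "op" keys is an assumption for illustration):

```python
def apply_instruction(component, instruction):
    """Apply one parameterized edit instruction to a component in place."""
    op = instruction["op"]
    if op == "drag":      # e.g. drag map component to coordinates (100, 80)
        component["x"], component["y"] = instruction["x"], instruction["y"]
    elif op == "scale":   # e.g. magnify flow component to 1.5 times its size
        component["w"] = int(component["w"] * instruction["factor"])
        component["h"] = int(component["h"] * instruction["factor"])
    elif op == "color":   # e.g. set the monitoring component's color to red
        component["color"] = instruction["value"]
    elif op == "font":    # e.g. adjust the title font to 25 pixels
        component["font_px"] = instruction["value"]
    else:
        raise ValueError(f"unknown edit op {op!r}")
    return component

comp = {"name": "map", "x": 0, "y": 0, "w": 400, "h": 300}
apply_instruction(comp, {"op": "drag", "x": 100, "y": 80})
apply_instruction(comp, {"op": "scale", "factor": 1.5})
```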
Step 303: and if the component position of each visual component does not exceed the editable position range and the component size of each visual component does not exceed the editable size range, taking each visual component as a target visual component in each layer.
Specifically, after the original component is adjusted by executing the editing instruction, it is necessary to check whether the adjustment result is within the editable range. Editing may push the position or size of a component beyond the preset editable range, so a check is required to avoid out-of-bounds conditions in the final generated result. If the check finds that the edited component positions and sizes are within the editable range, the editing process is valid: the adjusted component is the target customized component required by the user and can be output directly to the corresponding layer. If the check finds that the component is out of range, the instruction needs to be rolled back and re-executed until the component parameters are adjusted to within the editable range. This step ensures the compliance of the component-editing process; the output target component meets the position and size constraints, improving its reliability.
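The range check of step 303 reduces to interval tests against the two editable ranges; a minimal sketch, assuming the dictionary-shaped ranges from the earlier steps:

```python
def within_editable_range(component, pos_range, size_range):
    """Return True if an edited component stays inside its editable position
    and size ranges; out-of-range edits must be rolled back and redone."""
    return (pos_range["x_min"] <= component["x"] <= pos_range["x_max"]
            and pos_range["y_min"] <= component["y"] <= pos_range["y_max"]
            and size_range["min_w"] <= component["w"] <= size_range["max_w"]
            and size_range["min_h"] <= component["h"] <= size_range["max_h"])

pos = {"x_min": 0, "y_min": 0, "x_max": 1520, "y_max": 780}
size = {"min_w": 480, "max_w": 1920, "min_h": 270, "max_h": 1080}
ok = within_editable_range({"x": 100, "y": 80, "w": 600, "h": 450}, pos, size)
bad = within_editable_range({"x": 2000, "y": 80, "w": 600, "h": 450}, pos, size)
```

Only components for which this check returns True are promoted to target components in their layers.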
Step 104: and outputting the initial canvas page containing the target visualization component to the data large screen as a visualization page.
Wherein the visualization page refers to a canvas page containing a plurality of visualization components. In the embodiment of the application, the initial canvas page containing the target component is used for presenting the visual content in the data large screen after the component editing.
The target visualization component refers to an editorially adjusted parameterized visualization component. In the embodiment of the application, the customized component for displaying the visual page, which is generated after the human-computer interaction editing, is used for matching specific page requirements and effects.
Specifically, after the original visualization components of each layer have been edited to generate the target components, the whole canvas page containing these components needs to be output to the data large screen. At this point, the visual content of the map component, flow component, monitoring component and so on is already laid out on the canvas page by layer hierarchy. The whole canvas page can then serve as the visualization page, containing the user-edited, customized target components that match the requirements. Through format conversion and data transmission, the visualization page is finally presented on the data large screen; the components in the page are automatically matched and displayed in the visual window of the large screen according to the parameters of the canvas page. The user terminal thus sees the edited and adjusted visualization page, which intuitively presents the component layout, association relationships and other effects required by the user. By outputting to the large screen, the interactive editing process is completed in one pass, efficiently generating the customized visualization page.
Based on the above embodiment, as an alternative embodiment, in step 104: outputting the initial canvas page containing the target visualization component as a visualization page to the data large screen, which may further comprise the steps of:
step 401: and calling configuration data corresponding to the dynamic scene information according to a preset API.
Wherein dynamic scene information refers to a set of related data describing a particular dynamic environment. In the embodiment of the application, it is understood as the required information about the urban road traffic environment, used to support the visual processing of the scene.
Specifically, the configuration data is invoked to obtain component information related to the dynamic scenario. The preset API provides an interface for obtaining configuration data. The API predefines the structure and calling mode of the configuration data according to the unified specification. Aiming at a dynamic scene needing to be visualized, configuration data of the dynamic scene comprises configuration information such as scene names, scene ranges, related data sources in the scene, required component types and the like. Configuration data of the current scene can be obtained by calling a read configuration interface in the API, wherein the configuration data comprises scene range definition, a data set participating in visualization, a component type required to be used and the like. After the configuration data is obtained, the detailed information of the target scene can be known, and a basis is provided for the follow-up scene data obtaining and the component editing. And calling a standardized API to acquire configuration data, so that scene information can be acquired quickly, and the uniformity and efficiency of visual editing are improved.
Step 402: and binding configuration data according to the component names of the target visual components in each layer to obtain the target components in each layer.
Wherein, the configuration data refers to the structured data for organizing the dynamic scene information. In the embodiment of the application, the visual configuration information such as scene range, data source, component type and the like organized according to preset specifications is used for guiding the visual processing of the dynamic scene.
Specifically, the binding of the configuration data to the target component is to establish a correspondence between the component and the scene data. The target component has determined the name of the component at the time of generation, reflecting the functional type of the component. The configuration data also contains the name of the component type that the scene needs to use. Components of the same type name are associated with the configuration, such as binding a target component named "map component" with a dataset in the configuration named "map". After binding, the target component obtains the corresponding scene data access entry. The component can call the scene dynamic data through the configured data set to realize visualization. Thus, through the binding of the components and the configuration, the link between the components and the scene data is established, and a foundation is laid for the subsequent visualization processing.
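The name-based binding of step 402 can be sketched as a dictionary join between component names and configured datasets (the data-source strings below are hypothetical):

```python
def bind_configuration(components, config_datasets):
    """Bind each target component to the configuration dataset sharing its
    type name, giving the component its scene-data access entry."""
    bound = []
    for comp in components:
        dataset = config_datasets.get(comp["name"])  # None if unconfigured
        bound.append({**comp, "data_source": dataset})
    return bound

components = [{"name": "map"}, {"name": "traffic"}]
config = {"map": "api://scene/map", "traffic": "api://scene/traffic"}
result = bind_configuration(components, config)
```

After binding, each component carries the entry through which it pulls dynamic scene data at render time.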
Step 403: and generating a visual file according to each target component and the initial canvas page, and synchronizing the visual file to the data large screen to generate the visual page.
Specifically, the edited and adjusted visual content is generated into a file and output to a data large screen. After binding the scene configuration data, the target component contains the component content and data connection required by visualization. The initial canvas page containing the target component is saved as a visual page file. The file conforms to a predetermined format specification so that it can be recognized by a data large screen. And transmitting the visual page file to a data large screen end in a network synchronization mode. After the large screen receives the file, the content of the file can be analyzed, and a visual page is automatically rendered in a visual window of the large screen according to the position, the style and other parameters of the components in the file. The components in the visual page can realize real-time visual presentation effect based on the dynamic scene data. By generating and outputting the standardized file, the visual content is transmitted to the display end, and automatic and efficient visual page presentation is realized.
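The patent does not specify the file format; as one plausible sketch, the canvas page could be serialized to JSON so the large-screen side can parse the same structure back:

```python
import json

def export_visual_file(canvas):
    """Serialize the canvas page (layers, components, bound data sources)
    into a text payload the large-screen side can parse and render."""
    return json.dumps(canvas, indent=2, sort_keys=True)

canvas = {
    "width": 1920, "height": 1080,
    "layers": [{"name": "map", "z_index": 0,
                "components": [{"name": "map", "x": 100, "y": 80}]}],
}
payload = export_visual_file(canvas)
restored = json.loads(payload)  # the large screen recovers the same structure
```

Network synchronization of `payload` to the display end, plus rendering from the parsed structure, completes the step.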
Based on the above embodiment, as an alternative embodiment, in step 104: outputting the initial canvas page containing the target visualization component as a visualization page to the data large screen, which may further comprise the following steps after this step:
Step 501: and responding to the page calling operation of the operator, and acquiring the position of the calling object in the target area.
Specifically, the position in the target area of the visual page object called up by the operator is acquired. An operator can select the visual page object to be called up for display on the canvas by using interaction modes such as mouse clicking or gesture selection. Upon receiving the call, the coordinate location of the object in the current canvas area can be captured, e.g., the center location of the object is (x1, y1). Meanwhile, the size parameters of the visual page object can be obtained, e.g., the width and height of the object are w1 and h1, respectively. Thus, the precise location of the object in the current page area can be determined: the target area where the object is located is a rectangular area centered at (x1, y1) with extents w1 and h1. Obtaining the target-area information of the called object is a precondition for object navigation and positioning. By capturing the coordinates of the object region, the correspondence between the object and its position can be established, providing a basis for subsequent page-navigation customization.
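The rectangular target area described above follows directly from the captured center (x1, y1) and size (w1, h1); a small sketch of the geometry:

```python
def target_region(center_x, center_y, width, height):
    """Rectangle of the called-up object, centered at (center_x, center_y)
    with the given width and height."""
    return {
        "left": center_x - width / 2,
        "top": center_y - height / 2,
        "right": center_x + width / 2,
        "bottom": center_y + height / 2,
    }

# An object centered at (960, 540) with size 400 x 300:
region = target_region(960, 540, 400, 300)
```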
Step 502: and determining a target layer page according to the position, and synchronizing the target layer page and the visual page as a combined page to a data display screen.
Specifically, after the object position is acquired, a target layer page for page navigation needs to be determined. According to the position of the object, the layer to which the object belongs can be judged; for example, the position (x1, y1) belongs to layer A. The canvas page where layer A is located is then determined as the target layer page for continued browsing of the object. The target layer page and the visualization page currently containing the object are merged into one combined page file, in which the two pages are combined according to their positional relationship to support seamless navigation of the object. The combined page file is synchronized to the data large screen, and the large screen renders the display effect according to the file. Operators can therefore browse smoothly from the current page to the target page, achieving seamless connection of the visual objects and interactive navigation and positioning of visual page objects.
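Judging which layer a position belongs to can be done by hit-testing layers from top to bottom; the per-layer `region` rectangles below are an illustrative assumption (the patent does not state how layer extents are stored):

```python
def hit_layer(layers, x, y):
    """Return the name of the topmost layer whose region contains (x, y)."""
    for layer in sorted(layers, key=lambda l: l["z_index"], reverse=True):
        r = layer["region"]
        if r["left"] <= x <= r["right"] and r["top"] <= y <= r["bottom"]:
            return layer["name"]
    return None  # point falls outside every layer

layers = [
    {"name": "map", "z_index": 0,
     "region": {"left": 0, "top": 0, "right": 1920, "bottom": 1080}},
    {"name": "traffic", "z_index": 1,
     "region": {"left": 400, "top": 200, "right": 1200, "bottom": 800}},
]
target = hit_layer(layers, 500, 300)  # falls inside the traffic overlay
```

The page containing the returned layer would then be merged with the current visualization page into the combined page file.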
Referring to fig. 2, a visualization component editing system for constructing a page according to an embodiment of the present application is provided, where the system includes: the system comprises an information acquisition module, a component matching module, a component editing module and a page generation module, wherein:
the information acquisition module is used for acquiring scene construction information of the target area;
The assembly matching module is used for creating an initial canvas page and determining a plurality of layers in the initial canvas page and editing parameters of a first visual assembly in each layer according to scene construction information;
the assembly editing module is used for responding to editing operation of an operator based on editing parameters to obtain a second visual assembly in each layer;
and the page generation module is used for generating a visual page according to the second visual component and the initial canvas page in each layer.
On the basis of the embodiment, the component matching module is further used for acquiring the size and resolution of the visual window in the data large screen; and generating an initial canvas page based on the size and resolution of the visual window.
On the basis of the embodiment, the component matching module is further used for creating a first layer according to the urban map information and setting the first layer at a preset bottom layer position in the initial canvas page; creating a second layer according to the traffic flow information, and setting the second layer at a preset covering layer position in an initial canvas page; and creating a third layer according to the environment monitoring information, and setting the third layer at a preset top layer position in the initial canvas page.
On the basis of the embodiment, the component matching module is further used for matching, according to a preset visual component library, the original visual components corresponding to the urban map information, the traffic flow information and the environment monitoring information respectively; matching the editable position ranges of the original visual components in each layer according to the first layer position, the second layer position, the third layer position and the component positions of the original visual components in each layer; and determining the component type of each original visual component, and matching the editable size range of the original visual components in each layer according to the component type and the page size of the initial canvas page.
On the basis of the embodiment, the component editing module is further used for acquiring editing instructions corresponding to component dragging, component scaling, component color setting and component font proportion setting; according to the editing instruction, adjusting the original visual components in each layer to obtain each visual component; and if the component position of each visual component does not exceed the editable position range and the component size of each visual component does not exceed the editable size range, taking each visual component as a target visual component in each layer.
On the basis of the embodiment, the page generation module is further used for calling configuration data corresponding to the dynamic scene information according to a preset API; binding configuration data according to the component names of the target visual components in each layer to obtain target components in each layer; and generating a visual file according to each target component and the initial canvas page, and synchronizing the visual file to the data large screen to generate the visual page.
On the basis of the embodiment, the page generation module is further used for responding to the page calling operation of the operator to obtain the position of the called object in the target area; and determining a target layer page according to the position, and synchronizing the target layer page and the visualization page as a combined page to the data display screen.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
The application also discloses electronic equipment. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application. The electronic device 300 may include: at least one processor 301, at least one network interface 304, a user interface 303, a memory 305, at least one communication bus 302.
Wherein the communication bus 302 is used to enable connected communication between these components.
The user interface 303 may include a Display screen (Display) interface and a Camera (Camera) interface, and the optional user interface 303 may further include a standard wired interface and a standard wireless interface.
The network interface 304 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 301 may include one or more processing cores. The processor 301 utilizes various interfaces and lines to connect various portions of the overall server, perform various functions of the server and process data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 305, and invoking data stored in the memory 305. Alternatively, the processor 301 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), programmable logic array (Programmable Logic Array, PLA). The processor 301 may integrate one or a combination of several of a central processing unit (Central Processing Unit, CPU), an image processor (Graphics Processing Unit, GPU), and a modem etc. The CPU mainly processes an operating system, a user interface diagram, an application program and the like; the GPU is used for rendering and drawing the content required to be displayed by the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may not be integrated into the processor 301 and may be implemented by a single chip.
The Memory 305 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 305 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 305 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 305 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described respective method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. Memory 305 may also optionally be at least one storage device located remotely from the aforementioned processor 301. Referring to fig. 3, an operating system, a network communication module, a user interface module, and an application program of a visual component editing method for constructing pages may be included in the memory 305 as a computer storage medium.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used for providing an input interface for a user, and acquiring data input by the user; and processor 301 may be configured to invoke an application program in memory 305 for storing a visualization component editing method for building pages, which when executed by one or more processors 301, causes electronic device 300 to perform the method as in one or more of the embodiments described above. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
Each of the foregoing embodiments is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through service interfaces, devices, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part of it contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit its scope; equivalent changes and modifications made in accordance with the teachings of this disclosure fall within its scope. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. A method for editing a visualization component for building a page, comprising:
acquiring scene construction information of a target area, wherein the scene construction information comprises static scene information;
creating an initial canvas page, and generating a plurality of layers in the initial canvas page according to the static scene information, wherein each layer comprises an original visual component, and the original visual component is provided with an editable range;
in response to an editing operation of an operator, adjusting the original visual components in each layer within the editable range to generate the target visual components in each layer;
and outputting the initial canvas page containing the target visualization component to a data large screen as a visualization page.
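The four steps of claim 1 can be sketched as a small data model. This is an illustrative sketch only, not the patented implementation; every type, field, and function name here is an assumption:

```typescript
// Hypothetical data model for the claimed method: static scene information
// produces layers, each holding an original visual component that carries
// an editable range; the assembled layers form the canvas page.
interface EditableRange { maxX: number; maxY: number; maxW: number; maxH: number; }

interface VisualComponent {
  name: string;
  x: number; y: number; w: number; h: number;
  range: EditableRange;
}

interface Layer { zIndex: number; component: VisualComponent; }

interface BuiltPage { layers: Layer[]; }

// One layer is generated per piece of static scene information; the factory
// callback stands in for matching a component from a component library.
function buildPage(
  staticScene: string[],
  mkComponent: (info: string, z: number) => VisualComponent
): BuiltPage {
  return {
    layers: staticScene.map((info, z) => ({ zIndex: z, component: mkComponent(info, z) })),
  };
}
```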
2. The visualization component editing method for building a page of claim 1, wherein the creating an initial canvas page comprises:
Obtaining the size and resolution of a visual window in the data large screen;
the initial canvas page is generated according to the size and the resolution of the visual window.
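As a hedged sketch of claim 2 (the names and the devicePixelRatio-style resolution handling are assumptions, not taken from the patent), the initial canvas page can be derived from the visual window like this:

```typescript
// Hypothetical description of the large screen's visual window.
interface VisualWindow {
  width: number;            // window size in CSS px
  height: number;
  devicePixelRatio: number; // resolution scale of the large-screen display
}

interface InitialCanvas {
  cssWidth: number;
  cssHeight: number;
  pixelWidth: number;       // backing-store size
  pixelHeight: number;
}

// The canvas matches the window's size, while its backing store is scaled
// by the resolution so components render crisply on the large screen.
function createInitialCanvas(win: VisualWindow): InitialCanvas {
  return {
    cssWidth: win.width,
    cssHeight: win.height,
    pixelWidth: Math.round(win.width * win.devicePixelRatio),
    pixelHeight: Math.round(win.height * win.devicePixelRatio),
  };
}
```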
3. The visualization component editing method for building a page of claim 1, wherein the scene construction information comprises urban map information, traffic flow information, and environment monitoring information, the layers comprise a first layer, a second layer, and a third layer, and the generating a plurality of layers in the initial canvas page according to the static scene information comprises:
creating the first layer according to the urban map information, and setting the first layer at a preset bottom layer position in the initial canvas page;
creating the second layer according to the traffic flow information, and setting the second layer at a preset covering layer position in the initial canvas page;
and creating the third layer according to the environment monitoring information, and setting the third layer at a preset top layer position in the initial canvas page.
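A minimal sketch of the three-layer stacking in claim 3, assuming numeric z-indices stand in for the preset bottom, covering, and top layer positions (all names are illustrative):

```typescript
type LayerKind = "cityMap" | "trafficFlow" | "environmentMonitoring";

interface SceneLayer {
  kind: LayerKind;
  zIndex: number;
}

// Assumed mapping of scene information to its preset stacking position.
const LAYER_POSITIONS: Record<LayerKind, number> = {
  cityMap: 0,               // preset bottom layer position
  trafficFlow: 1,           // preset covering (overlay) layer position
  environmentMonitoring: 2, // preset top layer position
};

// Layers are created in any input order but stacked bottom-to-top.
function createSceneLayers(kinds: LayerKind[]): SceneLayer[] {
  return kinds
    .map(kind => ({ kind, zIndex: LAYER_POSITIONS[kind] }))
    .sort((a, b) => a.zIndex - b.zIndex);
}
```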
4. The visualization component editing method for building a page of claim 3, wherein the editable range comprises an editable position range and an editable size range, the generating a plurality of layers in the initial canvas page from the static scene information further comprising:
According to a preset visual component library, respectively matching the original visual components corresponding to the urban map information, the traffic flow information and the environment monitoring information;
according to the first layer position, the second layer position, the third layer position and the component positions of the original visual components in each layer, respectively matching the editable position ranges of the original visual components in each layer;
and determining the component type of each original visual component, and respectively matching the editable size range of the original visual component in each layer according to the component type and the page size of the initial canvas page.
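One plausible reading of claim 4, sketched with invented values: the editable size range scales with the page size per component type, and the editable position range keeps a component fully inside its layer. The fractions and type names below are illustrative assumptions, not values from the patent:

```typescript
interface Rect { x: number; y: number; w: number; h: number; }
interface SizeRange { minW: number; minH: number; maxW: number; maxH: number; }

// Per-type size limits as fractions of the page size (illustrative values).
const TYPE_SIZE_FRACTIONS: Record<string, { min: number; max: number }> = {
  map:   { min: 0.5,  max: 1.0 }, // a map component stays large
  chart: { min: 0.1,  max: 0.5 }, // a chart stays mid-sized
  label: { min: 0.02, max: 0.2 }, // a label stays small
};

function matchEditableSizeRange(type: string, page: { w: number; h: number }): SizeRange {
  const f = TYPE_SIZE_FRACTIONS[type] ?? { min: 0, max: 1 };
  return {
    minW: page.w * f.min, minH: page.h * f.min,
    maxW: page.w * f.max, maxH: page.h * f.max,
  };
}

// The component may be moved anywhere that keeps it fully inside its layer.
function matchEditablePositionRange(layer: Rect, component: Rect) {
  return {
    minX: layer.x, maxX: layer.x + layer.w - component.w,
    minY: layer.y, maxY: layer.y + layer.h - component.h,
  };
}
```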
5. The method for editing visual components for constructing a page according to claim 4, wherein the editing operation comprises component dragging, component scaling, component color setting, and component font scale setting, and the adjusting the original visual components in each of the layers within the editable range in response to the editing operation by the operator to generate the target visual components in each of the layers comprises:
acquiring the editing instructions corresponding to the component dragging, the component scaling, the component color setting, and the component font scale setting;
According to the editing instruction, the original visual components in each layer are adjusted to obtain each visual component;
and if the component position of each visual component does not exceed the editable position range and the component size of each visual component does not exceed the editable size range, taking each visual component as the target visual component in each layer.
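Claim 5's edit-then-validate flow could be modeled as below; the instruction shapes and range checks are assumptions for illustration, not the patented logic:

```typescript
interface Component {
  x: number; y: number; w: number; h: number;
  color: string;
  fontScale: number;
}

// The four editing operations named in the claim, as a discriminated union.
type EditInstruction =
  | { kind: "drag"; dx: number; dy: number }
  | { kind: "scale"; factor: number }
  | { kind: "color"; value: string }
  | { kind: "fontScale"; value: number };

function applyInstruction(c: Component, e: EditInstruction): Component {
  switch (e.kind) {
    case "drag":      return { ...c, x: c.x + e.dx, y: c.y + e.dy };
    case "scale":     return { ...c, w: c.w * e.factor, h: c.h * e.factor };
    case "color":     return { ...c, color: e.value };
    case "fontScale": return { ...c, fontScale: e.value };
  }
}

// The edited component becomes the target component only if its position
// and size stay inside the editable ranges.
function withinRanges(
  c: Component,
  pos:  { minX: number; maxX: number; minY: number; maxY: number },
  size: { minW: number; maxW: number; minH: number; maxH: number }
): boolean {
  return c.x >= pos.minX && c.x <= pos.maxX &&
         c.y >= pos.minY && c.y <= pos.maxY &&
         c.w >= size.minW && c.w <= size.maxW &&
         c.h >= size.minH && c.h <= size.maxH;
}
```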
6. The visualization component editing method for building a page of claim 1, wherein the scene construction information further comprises dynamic scene information, and the outputting the initial canvas page containing the target visualization component to a data large screen as a visualization page comprises:
calling configuration data corresponding to the dynamic scene information according to a preset API;
binding the configuration data according to the component names of the target visual components in each layer to obtain target components in each layer;
and generating a visual file according to each target component and the initial canvas page, and synchronizing the visual file to the data large screen to generate the visual page.
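The name-based binding of claim 6 might be sketched as follows, with the API response reduced to a plain name-to-configuration map (hypothetical names throughout):

```typescript
interface TargetComponent {
  name: string;
  config?: unknown; // configuration data bound from the preset API
}

// Bind fetched configuration data to target components by component name;
// components with no matching configuration are passed through unchanged.
function bindConfig(
  components: TargetComponent[],
  configByName: Record<string, unknown>
): TargetComponent[] {
  return components.map(c =>
    c.name in configByName ? { ...c, config: configByName[c.name] } : c
  );
}
```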
7. The method for editing a visualization component for building a page according to claim 1, further comprising, after the initial canvas page containing the target visualization component is output to the data large screen as a visualization page:
Responding to the page calling operation of the operator, and acquiring the position of a calling object in the target area;
and determining a target layer page according to the position, and synchronizing the target layer page and the visual page to the data large screen as a combined page.
8. A visualization component editing system for building pages, the system comprising:
the information acquisition module is used for acquiring scene construction information of the target area;
the assembly matching module is used for creating an initial canvas page and determining a plurality of layers in the initial canvas page and editing parameters of a first visual assembly in each layer according to the scene construction information;
the assembly editing module is used for responding to the editing operation of an operator based on the editing parameters to obtain second visual assemblies in each layer;
and the page generation module is used for generating a visual page according to the second visual component and the initial canvas page in each layer.
9. An electronic device comprising a processor, a memory, a user interface, and a network interface, the memory being for storing instructions, the user interface and the network interface being for communicating with other devices, and the processor being for executing the instructions stored in the memory, so as to cause the electronic device to perform the visual component editing method for building a page according to any one of claims 1-7.
10. A computer-readable storage medium storing instructions which, when executed, perform the steps of the visual component editing method for building a page according to any one of claims 1-7.
CN202311796307.2A 2023-12-25 2023-12-25 Visual component editing method, system, equipment and medium for constructing page Active CN117891448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311796307.2A CN117891448B (en) 2023-12-25 2023-12-25 Visual component editing method, system, equipment and medium for constructing page


Publications (2)

Publication Number Publication Date
CN117891448A true CN117891448A (en) 2024-04-16
CN117891448B CN117891448B (en) 2024-07-26

Family

ID=90640373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311796307.2A Active CN117891448B (en) 2023-12-25 2023-12-25 Visual component editing method, system, equipment and medium for constructing page

Country Status (1)

Country Link
CN (1) CN117891448B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200410044A1 (en) * 2019-06-28 2020-12-31 Baidu Online Network Technology (Beijing) Co., Ltd. Visualized edition method, device and apparatus, and storage medium
CN112214238A (en) * 2020-10-15 2021-01-12 上海顺舟智能科技股份有限公司 Internet of things service system configuration method based on intelligent application scene
CN113656011A (en) * 2021-08-17 2021-11-16 广州新科佳都科技有限公司 Visual development system of track traffic net low code
CN115599364A (en) * 2022-10-09 2023-01-13 阿里巴巴(中国)有限公司(Cn) Configuration method, device and system of visual component
US20230033541A1 (en) * 2021-07-28 2023-02-02 International Business Machines Corporation Generating a visualization of data points returned in response to a query based on attributes of a display device and display screen to render the visualization
WO2023056903A1 (en) * 2021-10-08 2023-04-13 钉钉(中国)信息技术有限公司 Page building method, server, terminal, and storage medium
CN116069323A (en) * 2022-12-27 2023-05-05 重庆中信科信息技术有限公司 Method for dynamically constructing visual large screen in modularized mode


Also Published As

Publication number Publication date
CN117891448B (en) 2024-07-26

Similar Documents

Publication Publication Date Title
CN109165401B (en) Method and device for generating two-dimensional construction map based on civil structure three-dimensional model
CN109460276A (en) The page and page configuration document generating method, device, terminal device and medium
CN104216691A (en) Application creating method and device
CN111752557A (en) Display method and device
JP7264989B2 (en) Visualization method, device and recording medium for multi-source earth observation image processing
CN111240669B (en) Interface generation method and device, electronic equipment and computer storage medium
CN114648615B (en) Method, device and equipment for controlling interactive reproduction of target object and storage medium
WO2017000898A1 (en) Software icon display method and apparatus
CN113535165A (en) Interface generation method and device, electronic equipment and computer readable storage medium
CN113821201A (en) Code development method and device, electronic equipment and storage medium
CN106846431B (en) Unified Web graph drawing system supporting multiple expression forms
CN117891448B (en) Visual component editing method, system, equipment and medium for constructing page
KR20120075626A (en) Apparatus and method for processing electric navigational chart in web-based service
WO2022228211A1 (en) Method and apparatus for constructing visual view
CN113254006B (en) Robot interaction method, system, device, electronic equipment and storage medium
WO2021135325A1 (en) Method and apparatus for gis point data rendering, computer device, and storage medium
CN116775174A (en) Processing method, device, equipment and medium based on user interface frame
CN114797109A (en) Object editing method and device, electronic equipment and storage medium
JP4968275B2 (en) Map data editing device and server for map data editing device
CN115115791A (en) Map editing method and device
CN115033226A (en) Page display method and device, terminal equipment and computer readable storage medium
CN115018975A (en) Data set generation method and device, electronic equipment and storage medium
CN114154095A (en) Page picture generation method, device, equipment and storage medium
CN114117161A (en) Display method and device
CN111737285B (en) Building query and labeling processing method and device based on geospatial analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant